Thu Dec  1 19:13:20 IST 2016 Starting regionserver on hscale-dev1-dn1
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 62057
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 32000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16000
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
2016-12-01 19:13:21,577 INFO  [main] util.VersionInfo: HBase 1.1.2.2.4.2.0-258
2016-12-01 19:13:21,577 INFO  [main] util.VersionInfo: Source code repository file:///grid/0/jenkins/workspace/HDP-build-centos6/bigtop/build/hbase/rpm/BUILD/hbase-1.1.2.2.4.2.0 revision=Unknown
2016-12-01 19:13:21,577 INFO  [main] util.VersionInfo: Compiled by jenkins on Mon Apr 25 06:36:21 UTC 2016
2016-12-01 19:13:21,577 INFO  [main] util.VersionInfo: From source with checksum 4f661ee4f9f148ce7bfcad5b0d667c27
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:PATH=/usr/sbin:/sbin:/usr/lib/ambari-server/*:/usr/lib64/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/jdk64/jdk1.8.0_60/bin:/usr/jdk64/jdk1.8.0_60/jre/bin:/root/bin:/var/lib/ambari-agent
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HISTCONTROL=ignoredups
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HBASE_PID_DIR=/var/run/hbase
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HBASE_REGIONSERVER_OPTS= -Xmn512m -Xms3072m -Xmx3072m  -XX:+HeapDumpOnOutOfMemoryError -XX:MaxDirectMemorySize=2g -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:MaxNewSize=4g -XX:InitiatingHeapOccupancyPercent=60 -XX:ParallelGCThreads=24 -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=5000 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=10102 -Djava.security.auth.login.config=/usr/hdp/current/hbase-regionserver/conf/hbase_regionserver_jaas.conf
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HBASE_CONF_DIR=/usr/hdp/current/hbase-regionserver/conf
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:JRE_HOME=/usr/jdk64/jdk1.8.0_60/jre
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:MAIL=/var/spool/mail/hbase
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:LD_LIBRARY_PATH=::/usr/hdp/2.4.2.0-258/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.2.0-258/hadoop/lib/native
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:LOGNAME=hbase
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HBASE_REST_OPTS=
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:PWD=/home/hbase
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HBASE_ROOT_LOGGER=INFO,RFA
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:LESSOPEN=||/usr/bin/lesspipe.sh %s
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:SHELL=/bin/bash
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HBASE_ENV_INIT=true
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:QTINC=/usr/lib64/qt-3.3/include
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HBASE_MASTER_OPTS= -Xms4096m -Xmx4096m  -XX:+HeapDumpOnOutOfMemoryError -XX:MaxDirectMemorySize=2g -XX:+AlwaysPreTouch -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=10101 -Dsplice.spark.enabled=true -Dsplice.spark.app.name=SpliceMachine -Dsplice.spark.master=yarn-client -Dsplice.spark.logConf=true -Dsplice.spark.broadcast.factory=org.apache.spark.broadcast.HttpBroadcastFactory -Dsplice.spark.driver.maxResultSize=1g -Dsplice.spark.driver.memory=1g -Dsplice.spark.dynamicAllocation.enabled=true -Dsplice.spark.dynamicAllocation.executorIdleTimeout=600 -Dsplice.spark.dynamicAllocation.minExecutors=0 -Dsplice.spark.io.compression.lz4.blockSize=32k -Dsplice.spark.kryo.referenceTracking=false -Dsplice.spark.kryo.registrator=com.splicemachine.derby.impl.SpliceSparkKryoRegistrator -Dsplice.spark.kryoserializer.buffer.max=512m -Dsplice.spark.kryoserializer.buffer=4m -Dsplice.spark.locality.wait=100 -Dsplice.spark.scheduler.mode=FAIR -Dsplice.spark.serializer=org.apache.spark.serializer.KryoSerializer -Dsplice.spark.shuffle.compress=false -Dsplice.spark.shuffle.file.buffer=128k -Dsplice.spark.shuffle.memoryFraction=0.7 -Dsplice.spark.shuffle.service.enabled=true -Dsplice.spark.storage.memoryFraction=0.1 -Dsplice.spark.yarn.am.extraLibraryPath=/usr/hdp/current/hadoop-client/lib/native -Dsplice.spark.yarn.am.waitTime=10s -Dsplice.spark.yarn.executor.memoryOverhead=2048 -Dsplice.spark.driver.extraJavaOptions=-Dlog4j.configuration=file:/etc/spark/conf/log4j.properties -Dsplice.spark.driver.extraLibraryPath=/usr/hdp/current/hadoop-client/lib/native -Dsplice.spark.driver.extraClassPath=/usr/hdp/current/hbase-regionserver/conf:/usr/hdp/current/hbase-regionserver/lib/htrace-core-3.1.0-incubating.jar -Dsplice.spark.executor.extraJavaOptions=-Dlog4j.configuration=file:/etc/spark/conf/log4j.properties -Dsplice.spark.executor.extraLibraryPath=/usr/hdp/current/hadoop-client/lib/native -Dsplice.spark.executor.extraClassPath=/usr/hdp/current/hbase-regionserver/conf:/usr/hdp/current/hbase-regionserver/lib/htrace-core-3.1.0-incubating.jar -Dsplice.spark.ui.retainedJobs=100 -Dsplice.spark.ui.retainedStages=100 -Dsplice.spark.worker.ui.retainedExecutors=100 -Dsplice.spark.worker.ui.retainedDrivers=100 -Dsplice.spark.streaming.ui.retainedBatches=100 -Dsplice.spark.executor.cores=4 -Dsplice.spark.executor.memory=8g -Dspark.compaction.reserved.slots=4 -Dsplice.spark.eventLog.enabled=true -Dsplice.spark.eventLog.dir=hdfs:///user/splice/history -Dsplice.spark.local.dir=/diska/tmp,/diskb/tmp,/diskc/tmp,/diskd/tmp -Djava.security.auth.login.config=/usr/hdp/current/hbase-regionserver/conf/hbase_master_jaas.conf
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HBASE_MANAGES_ZK=false
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HBASE_REGIONSERVERS=/usr/hdp/current/hbase-regionserver/conf/regionservers
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HADOOP_HOME=/usr/hdp/2.4.2.0-258/hadoop
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HBASE_NICENESS=0
2016-12-01 19:13:21,860 INFO  [main] util.ServerCommandLine: env:HBASE_OPTS=-Dhdp.version=2.4.2.0-258  -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log -Djava.io.tmpdir=/tmp -Djava.security.auth.login.config=/usr/hdp/current/hbase-regionserver/conf/hbase_client_jaas.conf -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc.log-201612011913  -Xmn512m -Xms3072m -Xmx3072m  -XX:+HeapDumpOnOutOfMemoryError -XX:MaxDirectMemorySize=2g -XX:+AlwaysPreTouch -XX:+UseG1GC -XX:MaxNewSize=4g -XX:InitiatingHeapOccupancyPercent=60 -XX:ParallelGCThreads=24 -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=5000 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.port=10102 -Djava.security.auth.login.config=/usr/hdp/current/hbase-regionserver/conf/hbase_regionserver_jaas.conf  -Dhbase.log.dir=/var/log/hbase -Dhbase.log.file=hbase-hbase-regionserver-hscale-dev1-dn1.log -Dhbase.home.dir=/usr/hdp/current/hbase-regionserver/bin/.. -Dhbase.id.str=hbase -Dhbase.root.logger=INFO,RFA -Djava.library.path=:/usr/hdp/2.4.2.0-258/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.2.0-258/hadoop/lib/native -Dhbase.security.logger=INFO,RFAS
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:HBASE_START_FILE=/var/run/hbase/hbase-hbase-regionserver.autorestart
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:HBASE_SECURITY_LOGGER=INFO,RFAS
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:SHLVL=3
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:ZOOKEEPER_HOME=/usr/hdp/2.4.2.0-258/zookeeper
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:HBASE_LOGFILE=hbase-hbase-regionserver-hscale-dev1-dn1.log
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:HISTSIZE=1000
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:JAVA_HOME=/usr/jdk64/jdk1.8.0_60
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:HDP_VERSION=2.4.2.0-258
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:XFILESEARCHPATH=/usr/dt/app-defaults/%L/Dt
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:LANG=en_US.UTF-8
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:G_BROKEN_FILENAMES=1
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:HBASE_CLASSPATH=/usr/hdp/2.4.2.0-258/hadoop/conf:/usr/hdp/2.4.2.0-258/hadoop/*:/usr/hdp/2.4.2.0-258/hadoop/lib/*:/usr/hdp/2.4.2.0-258/zookeeper/*:/usr/hdp/2.4.2.0-258/zookeeper/lib/*::/opt/splice/default/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/opt/splice/default/lib/db-client-2.0.1.28.jar:/opt/splice/default/lib/db-drda-2.0.1.28.jar:/opt/splice/default/lib/db-engine-2.0.1.28.jar:/opt/splice/default/lib/db-shared-2.0.1.28.jar:/opt/splice/default/lib/db-tools-i18n-2.0.1.28.jar:/opt/splice/default/lib/db-tools-ij-2.0.1.28.jar:/opt/splice/default/lib/disruptor-3.2.1.jar:/opt/splice/default/lib/gson-2.2.2.jar:/opt/splice/default/lib/hbase_pipeline-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/hbase_sql-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/hbase_storage-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/hppc-0.5.2.jar:/opt/splice/default/lib/kryo-2.21.jar:/opt/splice/default/lib/kryo-serializers-0.26.jar:/opt/splice/default/lib/lucene-core-4.3.1.jar:/opt/splice/default/lib/opencsv-2.3.jar:/opt/splice/default/lib/pipeline_api-2.0.1.28.jar:/opt/splice/default/lib/protobuf-java-2.5.0.jar:/opt/splice/default/lib/spark-assembly-hadoop2.7.1.2.4.2.0-258-1.6.2.jar:/opt/splice/default/lib/splice_access_api-2.0.1.28.jar:/opt/splice/default/lib/splice_auth-2.0.1.28.jar:/opt/splice/default/lib/splice_backup-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/splice_colperms-2.0.1.28.jar:/opt/splice/default/lib/splice_ee-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/splice_encoding-2.0.1.28.jar:/opt/splice/default/lib/splice_encryption-2.0.1.28.jar:/opt/splice/default/lib/splice_machine-2.0.1.28.jar:/opt/splice/default/lib/splice_protocol-2.0.1.28.jar:/opt/splice/default/lib/splice_si_api-2.0.1.28.jar:/opt/splice/default/lib/splice_timestamp_api-2.0.1.28.jar:/opt/splice/default/lib/stats-2.0.1.28.jar:/opt/splice/default/lib/super-csv-2.4.0.jar:/opt/splice/default/lib/utilities-2.0.1.28.jar
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:CVS_RSH=ssh
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=hbase
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:HBASE_ZNODE_FILE=/var/run/hbase/hbase-hbase-regionserver.znode
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-hbase-regionserver-hscale-dev1-dn1
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:HBASE_LOG_DIR=/var/log/hbase
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:USER=hbase
2016-12-01 19:13:21,861 INFO  [main] util.ServerCommandLine: env:CLASSPATH=/usr/hdp/current/hbase-regionserver/conf:/usr/jdk64/jdk1.8.0_60/lib/tools.jar:/usr/hdp/current/hbase-regionserver/bin/..:/usr/hdp/current/hbase-regionserver/bin/../lib/activation-1.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/aopalliance-1.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/api-util-1.0.0-M20.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/asm-3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/avro-1.7.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-beanutils-1.7.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-cli-1.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-codec-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-collections-3.2.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-compress-1.4.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-configuration-1.6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-daemon-1.0.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-digester-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-el-1.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-httpclient-3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-io-2.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-lang-2.6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-logging-1.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-math-2.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-math3-3.1.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-net-3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/curator-client-2.7.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/curator-framework-2.7.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/curator-recipes-2.7.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/disruptor-3.3.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/findbugs-annotations-1.3.9-1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/gson-2.2.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/guava-12.0.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/guice-3.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/guice-servlet-3.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-annotations-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-annotations-1.1.2.2.4.2.0-258-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-annotations.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-client-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-client.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-common-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-common-1.1.2.2.4.2.0-258-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-common.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-examples-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-examples.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop2-compat-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop2-compat.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop-compat-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop-compat.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-it-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-it-1.1.2.2.4.2.0-258-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-it.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-prefix-tree-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-prefix-tree.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-procedure-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-procedure.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-protocol-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-protocol.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-resource-bundle-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-resource-bundle.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-rest-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-rest.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-server-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-server-1.1.2.2.4.2.0-258-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-server.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-shell-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-shell.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-thrift-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-thrift.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/httpclient-4.2.5.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/httpcore-4.2.5.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-core-2.2.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-core-asl-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-xc-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jamon-runtime-2.3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jasper-compiler-5.5.23.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jasper-runtime-5.5.23.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/javax.inject-1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/java-xmlbuilder-0.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jaxb-api-2.2.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jcodings-1.0.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-client-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-core-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-guice-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-json-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-server-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jets3t-0.9.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jettison-1.3.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jetty-6.1.26.hwx.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/joni-2.1.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jruby-complete-1.6.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsch-0.1.42.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsp-2.1-6.1.14.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsp-api-2.1-6.1.14.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsr305-1.3.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/junit-4.11.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/leveldbjni-all-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/libthrift-0.9.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/log4j-1.2.17.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/metrics-core-2.2.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/netty-3.2.4.Final.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/netty-all-4.0.23.Final.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ojdbc6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/okhttp-2.4.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/okio-1.4.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/paranamer-2.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/phoenix-4.8.0-HBase-1.1-server.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/phoenix-server.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/protobuf-java-2.5.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ranger-hbase-plugin-shim-0.5.0.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ranger-plugin-classloader-0.5.0.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/servlet-api-2.5.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/slf4j-api-1.7.7.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/snappy-java-1.0.4.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/spymemcached-2.11.6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xercesImpl-2.9.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xml-apis-1.3.04.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xmlenc-0.52.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xz-1.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/zookeeper.jar:/usr/hdp/2.4.2.0-258/hadoop/conf:/usr/hdp/2.4.2.0-258/hadoop/lib/*:/usr/hdp/2.4.2.0-258/hadoop/.//*:/usr/hdp/2.4.2.0-258/hadoop-hdfs/./:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/*:/usr/hdp/2.4.2.0-258/hadoop-hdfs/.//*:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/*:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//*:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/*:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//*::mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.17.jar_bkp:mysql-connector-java.jar:/usr/hdp/2.4.2.0-258/tez/*:/usr/hdp/2.4.2.0-258/tez/lib/*:/usr/hdp/2.4.2.0-258/tez/conf:/usr/hdp/2.4.2.0-258/hadoop/conf:/usr/hdp/2.4.2.0-258/hadoop/*:/usr/hdp/2.4.2.0-258/hadoop/lib/*:/usr/hdp/2.4.2.0-258/zookeeper/*:/usr/hdp/2.4.2.0-258/zookeeper/lib/*::/opt/splice/default/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/opt/splice/default/lib/db-client-2.0.1.28.jar:/opt/splice/default/lib/db-drda-2.0.1.28.jar:/opt/splice/default/lib/db-engine-2.0.1.28.jar:/opt/splice/default/lib/db-shared-2.0.1.28.jar:/opt/splice/default/lib/db-tools-i18n-2.0.1.28.jar:/opt/splice/default/lib/db-tools-ij-2.0.1.28.jar:/opt/splice/default/lib/disruptor-3.2.1.jar:/opt/splice/default/lib/gson-2.2.2.jar:/opt/splice/default/lib/hbase_pipeline-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/hbase_sql-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/hbase_storage-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/hppc-0.5.2.jar:/opt/splice/default/lib/kryo-2.21.jar:/opt/splice/default/lib/kryo-serializers-0.26.jar:/opt/splice/default/lib/lucene-core-4.3.1.jar:/opt/splice/default/lib/opencsv-2.3.jar:/opt/splice/default/lib/pipeline_api-2.0.1.28.jar:/opt/splice/default/lib/protobuf-java-2.5.0.jar:/opt/splice/default/lib/spark-assembly-hadoop2.7.1.2.4.2.0-258-1.6.2.jar:/opt/splice/default/lib/splice_access_api-2.0.1.28.jar:/opt/splice/default/lib/splice_auth-2.0.1.28.jar:/opt/splice/default/lib/splice_backup-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/splice_colperms-2.0.1.28.jar:/opt/splice/default/lib/splice_ee-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/splice_encoding-2.0.1.28.jar:/opt/splice/default/lib/splice_encryption-2.0.1.28.jar:/opt/splice/default/lib/splice_machine-2.0.1.28.jar:/opt/splice/default/lib/splice_protocol-2.0.1.28.jar:/opt/splice/default/lib/splice_si_api-2.0.1.28.jar:/opt/splice/default/lib/splice_timestamp_api-2.0.1.28.jar:/opt/splice/default/lib/stats-2.0.1.28.jar:/opt/splice/default/lib/super-csv-2.4.0.jar:/opt/splice/default/lib/utilities-2.0.1.28.jar
2016-12-01 19:13:21,862 INFO  [main] util.ServerCommandLine: env:SERVER_GC_OPTS=-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hbase/gc.log-201612011913
2016-12-01 19:13:21,862 INFO  [main] util.ServerCommandLine: env:HADOOP_CONF=/usr/hdp/2.4.2.0-258/hadoop/conf
2016-12-01 19:13:21,862 INFO  [main] util.ServerCommandLine: env:HOSTNAME=hscale-dev1-dn1
2016-12-01 19:13:21,862 INFO  [main] util.ServerCommandLine: env:QTDIR=/usr/lib64/qt-3.3
2016-12-01 19:13:21,862 INFO  [main] util.ServerCommandLine: env:NLSPATH=/usr/dt/lib/nls/msg/%L/%N.cat
2016-12-01 19:13:21,862 INFO  [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2016-12-01 19:13:21,862 INFO  [main] util.ServerCommandLine: env:HBASE_HOME=/usr/hdp/current/hbase-regionserver/bin/..
2016-12-01 19:13:21,862 INFO  [main] util.ServerCommandLine: env:QTLIB=/usr/lib64/qt-3.3/lib
2016-12-01 19:13:21,862 INFO  [main] util.ServerCommandLine: env:HOME=/home/hbase
2016-12-01 19:13:21,862 INFO  [main] util.ServerCommandLine: env:MALLOC_ARENA_MAX=4
2016-12-01 19:13:21,862 INFO  [main] util.ServerCommandLine: vmName=Java HotSpot(TM) 64-Bit Server VM, vmVendor=Oracle Corporation, vmVersion=25.60-b23
2016-12-01 19:13:21,862 INFO  [main] util.ServerCommandLine: vmInputArguments=[-Dproc_regionserver, -XX:OnOutOfMemoryError=kill -9 %p, -Dhdp.version=2.4.2.0-258, -XX:ErrorFile=/var/log/hbase/hs_err_pid%p.log, -Djava.io.tmpdir=/tmp, -Djava.security.auth.login.config=/usr/hdp/current/hbase-regionserver/conf/hbase_client_jaas.conf, -verbose:gc, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -Xloggc:/var/log/hbase/gc.log-201612011913, -Xmn512m, -Xms3072m, -Xmx3072m, -XX:+HeapDumpOnOutOfMemoryError, -XX:MaxDirectMemorySize=2g, -XX:+AlwaysPreTouch, -XX:+UseG1GC, -XX:MaxNewSize=4g, -XX:InitiatingHeapOccupancyPercent=60, -XX:ParallelGCThreads=24, -XX:+ParallelRefProcEnabled, -XX:MaxGCPauseMillis=5000, -Dcom.sun.management.jmxremote.authenticate=false, -Dcom.sun.management.jmxremote.ssl=false, -Dcom.sun.management.jmxremote.port=10102, -Djava.security.auth.login.config=/usr/hdp/current/hbase-regionserver/conf/hbase_regionserver_jaas.conf, -Dhbase.log.dir=/var/log/hbase, -Dhbase.log.file=hbase-hbase-regionserver-hscale-dev1-dn1.log, -Dhbase.home.dir=/usr/hdp/current/hbase-regionserver/bin/.., -Dhbase.id.str=hbase, -Dhbase.root.logger=INFO,RFA, -Djava.library.path=:/usr/hdp/2.4.2.0-258/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.2.0-258/hadoop/lib/native, -Dhbase.security.logger=INFO,RFAS]
2016-12-01 19:13:22,071 INFO  [main] regionserver.RSRpcServices: regionserver/hscale-dev1-dn1/10.60.70.11:16020 server-side HConnection retries=50
2016-12-01 19:13:22,148 INFO  [main] ipc.SimpleRpcScheduler: Using deadline as user call queue, count=4
2016-12-01 19:13:22,157 INFO  [main] ipc.RpcServer: regionserver/hscale-dev1-dn1/10.60.70.11:16020: started 10 reader(s).
2016-12-01 19:13:22,190 INFO  [main] impl.MetricsConfig: loaded properties from hadoop-metrics2-hbase.properties
2016-12-01 19:13:22,207 INFO  [main] timeline.HadoopTimelineMetricsSink: Initializing Timeline metrics sink.
2016-12-01 19:13:22,207 INFO  [main] timeline.HadoopTimelineMetricsSink: Identified hostname = hscale-dev1-dn1, serviceName = hbase
2016-12-01 19:13:22,210 INFO  [main] timeline.HadoopTimelineMetricsSink: Collector Uri: http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:13:22,216 INFO  [main] impl.MetricsSinkAdapter: Sink timeline started
2016-12-01 19:13:22,225 INFO  [main] impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
2016-12-01 19:13:22,225 INFO  [main] impl.MetricsSystemImpl: HBase metrics system started
2016-12-01 19:13:22,355 INFO  [main] security.UserGroupInformation: Login successful for user hbase/hscale-dev1-dn1@HSCALE.COM using keytab file /etc/security/keytabs/hbase.service.keytab
2016-12-01 19:13:23,011 INFO  [main] fs.HFileSystem: Added intercepting call to namenode#getBlockLocations so can do block reordering using class class org.apache.hadoop.hbase.fs.HFileSystem$ReorderWALBlocks
2016-12-01 19:13:23,164 INFO  [main] zookeeper.RecoverableZooKeeper: Process identifier=regionserver:16020 connecting to ZooKeeper ensemble=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181
2016-12-01 19:13:23,170 INFO  [main] zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.6-258--1, built on 04/25/2016 05:22 GMT
2016-12-01 19:13:23,170 INFO  [main] zookeeper.ZooKeeper: Client environment:host.name=hscale-dev1-dn1
2016-12-01 19:13:23,170 INFO  [main] zookeeper.ZooKeeper: Client environment:java.version=1.8.0_60
2016-12-01 19:13:23,170 INFO  [main] zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
2016-12-01 19:13:23,170 INFO  [main] zookeeper.ZooKeeper: Client environment:java.home=/usr/jdk64/jdk1.8.0_60/jre
2016-12-01 19:13:23,170 INFO  [main] zookeeper.ZooKeeper: Client environment:java.class.path=/usr/hdp/current/hbase-regionserver/conf:/usr/jdk64/jdk1.8.0_60/lib/tools.jar:/usr/hdp/current/hbase-regionserver/bin/..:/usr/hdp/current/hbase-regionserver/bin/../lib/activation-1.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/aopalliance-1.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/api-util-1.0.0-M20.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/asm-3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/avro-1.7.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-beanutils-1.7.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-cli-1.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-codec-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-collections-3.2.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-compress-1.4.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-configuration-1.6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-daemon-1.0.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-digester-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-el-1.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-httpclient-3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-io-2.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-lang-2.6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-logging-1.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-math-2.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-math3-3.1.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/commons-net-3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/curator-client-2.7.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/curator-framework-2.7.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/curator-recipes-2.7.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/disruptor-3.3.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/findbugs-annotations-1.3.9-1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/gson-2.2.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/guava-12.0.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/guice-3.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/guice-servlet-3.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-annotations-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-annotations-1.1.2.2.4.2.0-258-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-annotations.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-client-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-client.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-common-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-common-1.1.2.2.4.2.0-258-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-common.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-examples-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-examples.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop2-compat-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop2-compat.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop-compat-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-hadoop-compat.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-it-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-it-1.1.2.2.4.2.0-258-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-it.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-prefix-tree-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-prefix-tree.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-procedure-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-procedure.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-protocol-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-protocol.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-resource-bundle-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-resource-bundle.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-rest-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-rest.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-server-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-server-1.1.2.2.4.2.0-258-tests.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-server.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-shell-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-shell.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-thrift-1.1.2.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/hbase-thrift.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/httpclient-4.2.5.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/httpcore-4.2.5.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-core-2.2.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-core-asl-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jackson-xc-1.9.13.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jamon-runtime-2.3.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jasper-compiler-5.5.23.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jasper-run
time-5.5.23.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/javax.inject-1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/java-xmlbuilder-0.4.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jaxb-api-2.2.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jcodings-1.0.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-client-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-core-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-guice-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-json-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jersey-server-1.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jets3t-0.9.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jettison-1.3.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jetty-6.1.26.hwx.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jetty-sslengine-6.1.26.hwx.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/joni-2.1.2.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jruby-complete-1.6.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsch-0.1.42.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsp-2.1-6.1.14.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsp-api-2.1-6.1.14.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/jsr305-1.3.9.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/junit-4.11.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/leveldbjni-all-1.8.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/libthrift-0.9.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/log4j-1.2.17.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/metrics-core-2.2.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/netty-3.2.4.Final.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/netty-all-4.0.23.Fi
nal.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ojdbc6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/okhttp-2.4.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/okio-1.4.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/paranamer-2.3.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/phoenix-4.8.0-HBase-1.1-server.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/phoenix-server.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/protobuf-java-2.5.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ranger-hbase-plugin-shim-0.5.0.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/ranger-plugin-classloader-0.5.0.2.4.2.0-258.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/servlet-api-2.5-6.1.14.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/servlet-api-2.5.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/slf4j-api-1.7.7.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/snappy-java-1.0.4.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/spymemcached-2.11.6.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xercesImpl-2.9.1.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xml-apis-1.3.04.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xmlenc-0.52.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/xz-1.0.jar:/usr/hdp/current/hbase-regionserver/bin/../lib/zookeeper.jar:/usr/hdp/2.4.2.0-258/hadoop/conf:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/spark-yarn
-shuffle.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/aws-java-sdk-1.7.4.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/ranger-hdfs-plugin-shim-0.5.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/curator-framework-2.7.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/junit-4.11.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/azure-storage-2.2.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/ojdbc6.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/ranger-plugin-classloader-0.5.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/activation-1.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/netty
-3.6.2.Final.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/ranger-yarn-plugin-shim-0.5.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/xz-1.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/zookeeper-3.4.6.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/curator-client-2.7.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/asm-3.2.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-api-1.7.10.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-nfs-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-common-2.7.1.2.4.2.0-258-tests.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-annotations-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-auth-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-azure-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-aws-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-common-tests.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-common.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-auth.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-aws.jar:/usr/hdp/2.4.2.0
-258/hadoop/.//hadoop-common-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-annotations.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-azure.jar:/usr/hdp/2.4.2.0-258/hadoop/.//hadoop-nfs.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/./:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/okio-1.4.0.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/asm-3.2.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/okhttp-2.4.0.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/.//hadoop-hdfs-2.7.1.2.4.2.0-258-tests.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/.//hadoop-hdfs-nf
s-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/.//hadoop-hdfs-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/httpclient-4.2.5.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jettison-1.1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/guice-3.0.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/gson-2.2.4.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/paranamer-2.3.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/avro-1.7.4.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/httpcore-4.2.5.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/spark-assembly-hadoop2.7.1.2.4.2.0-258-1.6.2.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/hdp/2.4.2.0-258/had
oop-yarn/lib/javassist-3.18.1-GA.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/curator-framework-2.7.1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jets3t-0.9.0.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/activation-1.1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/zookeeper-3.4.6.2.4.2.0-258-tests.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/guava-11.0.2.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/fst-2.24.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/xz-1.0.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/zookeeper-3.4.6.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-io-2.4.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/curator-client-2.7.1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jsch-0.1.42.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-net-3.1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/objenesis-2.1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/curator-recipe
s-2.7.1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/asm-3.2.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/lib/jackson-core-2.2.3.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-common-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-timeline-plugins-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-registry-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-common-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-client-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-tests-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-resourc
emanager.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-api-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/2.4.2.0-258/hadoop-yarn/.//hadoop-yarn-server-timeline-plugins.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/guice-3.0.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/jersey-core-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/junit-4.11.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/xz-1.0.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/li
b/commons-io-2.4.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/asm-3.2.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//joda-time-2.9.3.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-collections-3.2.2.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jettison-1.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-ant-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-auth-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-sls-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//avro-1.7.4.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//c
ommons-math3-3.1.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-datajoin-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jetty-6.1.26.hwx.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//junit-4.11.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-auth.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//activation-1.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jackson-xc-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/u
sr/hdp/2.4.2.0-258/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//netty-3.6.2.Final.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-archives-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//xz-1.0.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//zookeeper-3.4.6.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-extras-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-rumen-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-openstack-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-ant.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//asm-3.2.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-gridmix
-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-streaming-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//commons-lang3-3.3.2.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//jackson-core-2.2.3.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.1.2.4.2.0-258-tests.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-distcp-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop-mapreduce/.//hadoop-extras.jar::mysql-connector-java-5.1.17.jar:mysql-connector-java-5.1.17.jar_bkp:mysql-connector-java.jar:/usr/hdp/2.4.2.0-258/tez/tez-yarn-timeline-history-with-acls-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/tez-runtime-library-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/tez-common-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/tez-dag-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/tez-api-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/tez-tests-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/tez-examples-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/tez-runtime-internals-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/tez-mapreduce-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/tez-yarn-timeline-history-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.
0-258/tez/tez-history-parser-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/tez-yarn-timeline-history-with-fs-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/tez-yarn-timeline-cache-plugin-0.7.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.2.0-258/tez/lib/servlet-api-2.5.jar:/usr/hdp/2.4.2.0-258/tez/lib/hadoop-annotations-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/lib/hadoop-yarn-server-timeline-plugins-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/lib/slf4j-api-1.7.5.jar:/usr/hdp/2.4.2.0-258/tez/lib/hadoop-mapreduce-client-core-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/lib/commons-codec-1.4.jar:/usr/hdp/2.4.2.0-258/tez/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.2.0-258/tez/lib/hadoop-azure-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/lib/hadoop-aws-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.2.0-258/tez/lib/hadoop-yarn-server-web-proxy-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/lib/jersey-client-1.9.jar:/usr/hdp/2.4.2.0-258/tez/lib/commons-collections4-4.1.jar:/usr/hdp/2.4.2.0-258/tez/lib/guava-11.0.2.jar:/usr/hdp/2.4.2.0-258/tez/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.2.0-258/tez/lib/jettison-1.3.4.jar:/usr/hdp/2.4.2.0-258/tez/lib/commons-io-2.4.jar:/usr/hdp/2.4.2.0-258/tez/lib/commons-cli-1.2.jar:/usr/hdp/2.4.2.0-258/tez/lib/jersey-json-1.9.jar:/usr/hdp/2.4.2.0-258/tez/lib/jsr305-2.0.3.jar:/usr/hdp/2.4.2.0-258/tez/lib/commons-lang-2.6.jar:/usr/hdp/2.4.2.0-258/tez/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.2.0-258/tez/lib/hadoop-mapreduce-client-common-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/tez/conf:/usr/hdp/2.4.2.0-258/hadoop/conf:/usr/hdp/2.4.2.0-258/hadoop/hadoop-nfs-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/hadoop-common-2.7.1.2.4.2.0-258-tests.jar:/usr/hdp/2.4.2.0-258/hadoop/hadoop-annotations-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/hadoop-auth-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/hadoop-azure-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.
2.0-258/hadoop/hadoop-aws-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/hadoop-common-tests.jar:/usr/hdp/2.4.2.0-258/hadoop/hadoop-common.jar:/usr/hdp/2.4.2.0-258/hadoop/hadoop-auth.jar:/usr/hdp/2.4.2.0-258/hadoop/hadoop-aws.jar:/usr/hdp/2.4.2.0-258/hadoop/hadoop-common-2.7.1.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/hadoop-annotations.jar:/usr/hdp/2.4.2.0-258/hadoop/hadoop-azure.jar:/usr/hdp/2.4.2.0-258/hadoop/hadoop-nfs.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/spark-yarn-shuffle.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/aws-java-sdk-1.7.4.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/ranger-hdfs-plugin-shim-0.5.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jersey-core-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/curator-framework-2.7.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/junit-4.11.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/azure-storage-2.2.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/ojdbc6.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/ranger-plugin-classloader-0.5.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/activation-1.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/ranger-yarn-plugin-shim-0.5.0.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/xz-1.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/zookeeper-3.4.6.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/curator-client-2.7.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/asm-3.2.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/slf4j-api-1.7.10.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.2.0-258/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.4.2.0-258/zookeeper/zookeeper.jar:/usr/hdp/2.4.2.0-258/zookeeper/zookeeper-3.4.6.2.4.2.0-258.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/netty-3.7.0.Final.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/plexus-interpolation-1.11.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/nekohtml-1.9.6.2.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/commons-codec-1.6.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/maven-plugin-registry-2.2.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/jline-0.9.94.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/wagon-http-2.4.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/wagon-http-shared-1.0-beta-6.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/jsoup-1.7.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/maven-artifact-manager-2.2.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/maven-profile-2.2.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/commons-io-2.2.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/ant-1.8.0.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/log4j-1.2.16.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/maven-model-2.2.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/maven-ant-tasks-2.1.3.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/maven-error-diagnostics-2.2.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/httpclient-4.2.3.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-api-1.6.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/wagon-http-shared4-2.4.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/backport-util-concurrent-3.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/plexus-utils-3.0.8.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/maven-settings-2.2.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/slf4j-log4j12-1.6.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/xercesMinimal-1.9.6.2.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/plexus-container-default-1.0-alpha-9-stable-1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/ant-launcher-1.8.0.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/wagon-file-1.0-beta-6.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/maven-artifact-2.2.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/commons-logging-1.1.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/classworlds-1.1-alpha-2.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/wagon-http-lightweight-1.0-beta-6.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/wagon-provider-api-2.4.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/maven-repository-metadata-2.2.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/maven-project-2.2.1.jar:/usr/hdp/2.4.2.0-258/zookeeper/lib/httpcore-4.2.3.jar::/opt/splice/default/lib/concurrentlinkedhashmap-lru-1.4.2.jar:/opt/splice/default/lib/db-client-2.0.1.28.jar:/opt/splice/default/lib/db-drda-2.0.1.28.jar:/opt/splice/default/lib/db-engine-2.0.1.28.jar:/opt/splice/default/lib/db-shared-2.0.1.28.jar:/opt/splice/default/lib/db-tools-i18n-2.0.1.28.jar:/opt/splice/default/lib/db-tools-ij-2.0.1.28.jar:/opt/splice/default/lib/disruptor-3.2.1.jar:/opt/splice/default/lib/gson-2.2.2.jar:/opt/splice/default/lib/hbase_pipeline-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/hbase_sql-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/hbase_storage-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/hppc-0.5.2.jar:/opt/splice/default/lib/kryo-2.21.jar:/opt/splice/default/lib/kryo-serializers-0.26.jar:/opt/splice/default/lib/lucene-core-4.3.1.jar:/opt/splice/default/lib/opencsv-2.3.jar:/opt/splice/default/lib/pipeline_api-2.0.1.28.jar:/opt/splice/default/lib/protobuf-java-2.5.0.jar:/opt/splice/default/lib/spark-assembly-hadoop2.7.1.2.4.2.0-258-1.6.2.jar:/opt/splice/default/lib/splice_access_api-2.0.1.28.jar:/opt/splice/default/lib/splice_auth-2.0.1.28.jar:/opt/splice/default/lib/splice_backup-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/splice_colperms-2.0.1.28.jar:/opt/splice/default/lib/splice_ee-hdp2.4.2-2.0.1.28.jar:/opt/splice/default/lib/splice_encoding-2.0.1.28.jar:/opt/splice/default/lib/splice_encryption-2.0.1.28.jar:/opt/splice/default/lib/splice_machine-2.0.1.28.jar:/opt/splice/default/lib/splice_protocol-2.0.1.28.jar:/opt/splice/default/lib/splice_si_api-2.0.1.28.jar:/opt/splice/default/lib/splice_timestamp_api-2.0.1.28.jar:/opt/splice/default/lib/stats-2.0.1.28.jar:/opt/splice/default/lib/super-csv-2.4.0.jar:/opt/splice/default/lib/utilities-2.0.1.28.jar
2016-12-01 19:13:23,171 INFO  [main] zookeeper.ZooKeeper: Client environment:java.library.path=:/usr/hdp/2.4.2.0-258/hadoop/lib/native/Linux-amd64-64:/usr/hdp/2.4.2.0-258/hadoop/lib/native
2016-12-01 19:13:23,171 INFO  [main] zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
2016-12-01 19:13:23,171 INFO  [main] zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
2016-12-01 19:13:23,171 INFO  [main] zookeeper.ZooKeeper: Client environment:os.name=Linux
2016-12-01 19:13:23,171 INFO  [main] zookeeper.ZooKeeper: Client environment:os.arch=amd64
2016-12-01 19:13:23,171 INFO  [main] zookeeper.ZooKeeper: Client environment:os.version=2.6.32-504.el6.x86_64
2016-12-01 19:13:23,171 INFO  [main] zookeeper.ZooKeeper: Client environment:user.name=hbase
2016-12-01 19:13:23,171 INFO  [main] zookeeper.ZooKeeper: Client environment:user.home=/home/hbase
2016-12-01 19:13:23,171 INFO  [main] zookeeper.ZooKeeper: Client environment:user.dir=/home/hbase
2016-12-01 19:13:23,172 INFO  [main] zookeeper.ZooKeeper: Initiating client connection, connectString=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181 sessionTimeout=120000 watcher=regionserver:160200x0, quorum=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181, baseZNode=/hbase-secure
2016-12-01 19:13:23,188 INFO  [main-SendThread(hscale-dev1-dn2:2181)] zookeeper.Login: successfully logged in.
2016-12-01 19:13:23,194 INFO  [main-SendThread(hscale-dev1-dn2:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-12-01 19:13:23,198 INFO  [main-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Opening socket connection to server hscale-dev1-dn2/10.60.70.12:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-12-01 19:13:23,200 INFO  [Thread-11] zookeeper.Login: TGT refresh thread started.
2016-12-01 19:13:23,202 INFO  [main-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Socket connection established to hscale-dev1-dn2/10.60.70.12:2181, initiating session
2016-12-01 19:13:23,207 INFO  [Thread-11] zookeeper.Login: TGT valid starting at:        Thu Dec 01 19:13:23 IST 2016
2016-12-01 19:13:23,207 INFO  [Thread-11] zookeeper.Login: TGT expires:                  Fri Dec 02 19:13:23 IST 2016
2016-12-01 19:13:23,207 INFO  [Thread-11] zookeeper.Login: TGT refresh sleeping until: Fri Dec 02 14:35:22 IST 2016
2016-12-01 19:13:23,235 INFO  [main-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Session establishment complete on server hscale-dev1-dn2/10.60.70.12:2181, sessionid = 0x258ba9a256f000e, negotiated timeout = 120000
2016-12-01 19:13:23,323 INFO  [ZKSecretWatcher-leaderElector] zookeeper.ZKLeaderManager: Found existing leader with ID: hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:13:23,334 INFO  [RpcServer.responder] ipc.RpcServer: RpcServer.responder: starting
2016-12-01 19:13:23,335 INFO  [RpcServer.listener,port=16020] ipc.RpcServer: RpcServer.listener,port=16020: starting
2016-12-01 19:13:23,388 INFO  [main] mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-12-01 19:13:23,390 INFO  [main] http.HttpRequestLog: Http request log for http.requests.regionserver is not defined
2016-12-01 19:13:23,398 INFO  [main] http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.hbase.http.HttpServer$QuotingInputFilter)
2016-12-01 19:13:23,399 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context regionserver
2016-12-01 19:13:23,399 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-12-01 19:13:23,399 INFO  [main] http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-12-01 19:13:23,408 INFO  [main] http.HttpServer: Jetty bound to port 16030
2016-12-01 19:13:23,409 INFO  [main] mortbay.log: jetty-6.1.26.hwx
2016-12-01 19:13:23,651 INFO  [main] mortbay.log: Started SelectChannelConnector@0.0.0.0:16030
2016-12-01 19:13:23,692 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x6e613179 connecting to ZooKeeper ensemble=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181
2016-12-01 19:13:23,692 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] zookeeper.ZooKeeper: Initiating client connection, connectString=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181 sessionTimeout=120000 watcher=hconnection-0x6e6131790x0, quorum=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181, baseZNode=/hbase-secure
2016-12-01 19:13:23,692 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-SendThread(hscale-dev1-dn2:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-12-01 19:13:23,693 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Opening socket connection to server hscale-dev1-dn2/10.60.70.12:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-12-01 19:13:23,694 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Socket connection established to hscale-dev1-dn2/10.60.70.12:2181, initiating session
2016-12-01 19:13:23,715 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Session establishment complete on server hscale-dev1-dn2/10.60.70.12:2181, sessionid = 0x258ba9a256f000f, negotiated timeout = 120000
2016-12-01 19:13:23,779 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] regionserver.HRegionServer: ClusterId : 3412f612-2ca0-4470-bd19-83048d75e75d
2016-12-01 19:13:23,830 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] regionserver.MemStoreFlusher: globalMemStoreLimit=1.2 G, globalMemStoreLimitLowMark=1.1 G, maxHeap=3 G
2016-12-01 19:13:23,833 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] regionserver.HRegionServer: CompactionChecker runs every 10sec
2016-12-01 19:13:23,863 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] regionserver.RegionServerCoprocessorHost: System coprocessor loading is enabled
2016-12-01 19:13:23,863 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] regionserver.RegionServerCoprocessorHost: Table coprocessor loading is enabled
2016-12-01 19:13:23,936 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:13:24,004 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config: Created Splice configuration.
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  LOG = [org.apache.log4j.Logger@2d9334c8]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  activeTransactionCacheSize = [4096]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  authentication = [NATIVE]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  authenticationCustomProvider = [null]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  authenticationLdapSearchauthdn = [null]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  authenticationLdapSearchauthpw = [null]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  authenticationLdapSearchbase = [null]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  authenticationLdapSearchfilter = [null]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  authenticationLdapServer = [null]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  authenticationNativeAlgorithm = [SHA-512]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  authenticationNativeCreateCredentialsDatabase = [true]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  backupParallelism = [16]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  backupPath = [/backup]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  batchOnceBatchSize = [50000]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  broadcastRegionMbThreshold = [30]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  broadcastRegionRowThreshold = [1000000]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  cardinalityPrecision = [14]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  clientPause = [1000]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  compactionReservedSlots = [1]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  completedTxnCacheSize = [131072]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  completedTxnConcurrency = [128]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  compressionAlgorithm = [snappy]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  configSource = [com.splicemachine.access.HBaseConfigurationSource@35be7ed1]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  coreWriterThreads = [2]
2016-12-01 19:13:24,008 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ddlDrainingInitialWait = [1000]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ddlDrainingMaximumWait = [100000]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ddlRefreshInterval = [10000]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  debugDumpBindTree = [false]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  debugDumpClassFile = [false]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  debugDumpOptimizedTree = [false]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  debugLogStatementContext = [false]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.block.access.key.update.interval = [600]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.block.access.token.enable = [true]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.block.access.token.lifetime = [600]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.block.scanner.volume.bytes.per.second = [1048576]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.blockreport.initialDelay = [120]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.blockreport.intervalMsec = [21600000]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.blockreport.split.threshold = [1000000]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.blocksize = [134217728]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.bytes-per-checksum = [512]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.cachereport.intervalMsec = [10000]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client-write-packet-size = [65536]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.block.write.replace-datanode-on-failure.best-effort = [false]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.block.write.replace-datanode-on-failure.enable = [true]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.block.write.replace-datanode-on-failure.policy = [DEFAULT]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.block.write.retries = [3]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.cached.conn.retry = [3]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.context = [default]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.datanode-restart.timeout = [30]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.domain.socket.data.traffic = [false]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.failover.connection.retries = [0]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.failover.connection.retries.on.timeouts = [0]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.failover.max.attempts = [15]
2016-12-01 19:13:24,009 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.failover.sleep.base.millis = [500]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.failover.sleep.max.millis = [15000]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.file-block-storage-locations.num-threads = [10]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.file-block-storage-locations.timeout.millis = [1000]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.https.keystore.resource = [ssl-client.xml]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.https.need-auth = [false]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.mmap.cache.size = [256]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.mmap.cache.timeout.ms = [3600000]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.mmap.enabled = [true]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.mmap.retry.timeout.ms = [300000]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.read.shortcircuit = [true]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.read.shortcircuit.buffer.size = [131072]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.read.shortcircuit.skip.checksum = [false]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.read.shortcircuit.streams.cache.expiry.ms = [300000]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.read.shortcircuit.streams.cache.size = [4096]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.retry.policy.enabled = [false]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.short.circuit.replica.stale.threshold.ms = [1800000]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.slow.io.warning.threshold.ms = [30000]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.use.datanode.hostname = [false]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.use.legacy.blockreader.local = [false]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.client.write.exclude.nodes.cache.expiry.interval.millis = [600000]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.cluster.administrators = [ hdfs]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.content-summary.limit = [5000]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.address = [0.0.0.0:1019]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.available-space-volume-choosing-policy.balanced-space-preference-fraction = [0.75f]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.available-space-volume-choosing-policy.balanced-space-threshold = [10737418240]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.balance.bandwidthPerSec = [6250000]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.block-pinning.enabled = [false]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.block.id.layout.upgrade.threads = [12]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.bp-ready.timeout = [20]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.cache.revocation.polling.ms = [500]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.cache.revocation.timeout.ms = [900000]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.data.dir = [/hadoop/hdfs/data]
2016-12-01 19:13:24,010 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.data.dir.perm = [750]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.directoryscan.interval = [21600]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.directoryscan.threads = [1]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.dns.interface = [default]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.dns.nameserver = [default]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.drop.cache.behind.reads = [false]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.drop.cache.behind.writes = [false]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.du.reserved = [1073741824]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.failed.volumes.tolerated = [0]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.fsdatasetcache.max.threads.per.volume = [4]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.handler.count = [20]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.hdfs-blocks-metadata.enabled = [false]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.http.address = [0.0.0.0:1022]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.https.address = [0.0.0.0:50475]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.ipc.address = [0.0.0.0:8010]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.kerberos.principal = [dn/_HOST@HSCALE.COM]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.keytab.file = [/etc/security/keytabs/dn.service.keytab]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.max.locked.memory = [0]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.max.transfer.threads = [1024]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.readahead.bytes = [4193404]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.scan.period.hours = [504]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.shared.file.descriptor.paths = [/dev/shm,/tmp]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.slow.io.warning.threshold.ms = [300]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.sync.behind.writes = [false]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.transfer.socket.recv.buffer.size = [131072]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.transfer.socket.send.buffer.size = [131072]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.datanode.use.datanode.hostname = [false]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.default.chunk.view.size = [32768]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.domain.socket.path = [/var/lib/hadoop-hdfs/dn_socket]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.encrypt.data.transfer = [false]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.encrypt.data.transfer.cipher.key.bitlength = [128]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.encrypt.data.transfer.cipher.suites = [AES/CTR/NoPadding]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.ha.automatic-failover.enabled = [false]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.ha.fencing.ssh.connect-timeout = [30000]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.ha.log-roll.period = [120]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.ha.tail-edits.period = [60]
2016-12-01 19:13:24,011 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.heartbeat.interval = [3]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.hosts.exclude = [/etc/hadoop/conf/dfs.exclude]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.http.policy = [HTTP_ONLY]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.https.port = [50470]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.https.server.keystore.resource = [ssl-server.xml]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.image.compress = [false]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.image.compression.codec = [org.apache.hadoop.io.compress.DefaultCodec]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.image.transfer.bandwidthPerSec = [0]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.image.transfer.chunksize = [65536]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.image.transfer.timeout = [60000]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.journalnode.edits.dir = [/hadoop/hdfs/journalnode]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.journalnode.http-address = [0.0.0.0:8480]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.journalnode.https-address = [0.0.0.0:8481]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.journalnode.rpc-address = [0.0.0.0:8485]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.accesstime.precision = [0]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.acls.enabled = [false]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.audit.log.async = [true]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.audit.loggers = [default]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.avoid.read.stale.datanode = [true]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.avoid.write.stale.datanode = [true]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.backup.address = [0.0.0.0:50100]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.backup.http-address = [0.0.0.0:50105]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.blocks.per.postponedblocks.rescan = [10000]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.checkpoint.check.period = [60]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.checkpoint.dir = [/hadoop/hdfs/namesecondary]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.checkpoint.edits.dir = [/hadoop/hdfs/namesecondary]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.checkpoint.max-retries = [3]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.checkpoint.period = [21600]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.checkpoint.txns = [1000000]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.datanode.registration.ip-hostname-check = [true]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.decommission.blocks.per.interval = [500000]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.decommission.interval = [30]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.decommission.max.concurrent.tracked.nodes = [100]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.delegation.key.update-interval = [86400000]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.delegation.token.max-lifetime = [604800000]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.delegation.token.renew-interval = [86400000]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.edit.log.autoroll.check.interval.ms = [300000]
2016-12-01 19:13:24,012 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.edit.log.autoroll.multiplier.threshold = [2.0]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.edits.dir = [/hadoop/hdfs/namenode]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.edits.journal-plugin.qjournal = [org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.edits.noeditlogchannelflush = [false]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.enable.retrycache = [true]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.fs-limits.max-blocks-per-file = [1048576]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.fs-limits.max-component-length = [255]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.fs-limits.max-directory-items = [1048576]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.fs-limits.max-xattr-size = [16384]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.fs-limits.max-xattrs-per-inode = [32]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.fs-limits.min-block-size = [1048576]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.fslock.fair = [false]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.full.block.report.lease.length.ms = [300000]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.handler.count = [100]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.heartbeat.recheck-interval = [300000]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.http-address = [hscale-dev1-nn:50070]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.https-address = [hscale-dev1-nn:50470]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.inotify.max.events.per.rpc = [1000]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.invalidate.work.pct.per.iteration = [0.32f]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.kerberos.internal.spnego.principal = [HTTP/_HOST@HSCALE.COM]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.kerberos.principal = [nn/_HOST@HSCALE.COM]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.kerberos.principal.pattern = [*]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.keytab.file = [/etc/security/keytabs/nn.service.keytab]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.lazypersist.file.scrub.interval.sec = [300]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.lifeline.handler.ratio = [0.10]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.list.cache.directives.num.responses = [100]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.list.cache.pools.num.responses = [100]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.list.encryption.zones.num.responses = [100]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.max.extra.edits.segments.retained = [10000]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.max.full.block.report.leases = [6]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.max.objects = [0]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.metrics.logger.period.seconds = [600]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.name.dir = [/hadoop/hdfs/namenode]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.name.dir.restore = [true]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.num.checkpoints.retained = [2]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.num.extra.edits.retained = [1000000]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.path.based.cache.block.map.allocation.percent = [0.25]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.path.based.cache.refresh.interval.ms = [30000]
2016-12-01 19:13:24,013 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.path.based.cache.retry.interval.ms = [30000]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.reject-unresolved-dn-topology-mapping = [false]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.replication.considerLoad = [true]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.replication.interval = [3]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.replication.min = [1]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.replication.work.multiplier.per.iteration = [2]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.resource.check.interval = [5000]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.resource.checked.volumes.minimum = [1]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.resource.du.reserved = [104857600]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.retrycache.expirytime.millis = [600000]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.retrycache.heap.percent = [0.03f]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.rpc-address = [hscale-dev1-nn:8020]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.safemode.extension = [30000]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.safemode.min.datanodes = [0]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.safemode.threshold-pct = [0.999]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.secondary.http-address = [hscale-dev1-dn1:50090]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.secondary.https-address = [0.0.0.0:50091]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.stale.datanode.interval = [30000]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.startup.delay.block.deletion.sec = [3600]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.support.allow.format = [true]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.top.enabled = [true]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.top.num.users = [10]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.top.window.num.buckets = [10]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.top.windows.minutes = [1,5,25]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.write.stale.datanode.ratio = [1.0f]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.namenode.xattrs.enabled = [true]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.permissions.enabled = [true]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.permissions.superusergroup = [hdfs]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.replication = [3]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.replication.max = [50]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.secondary.namenode.kerberos.internal.spnego.principal = [HTTP/_HOST@HSCALE.COM]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.secondary.namenode.kerberos.principal = [nn/_HOST@HSCALE.COM]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.secondary.namenode.keytab.file = [/etc/security/keytabs/nn.service.keytab]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.short.circuit.shared.memory.watcher.interrupt.check.ms = [60000]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.storage.policy.enabled = [true]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.stream-buffer-size = [4096]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.support.append = [true]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.user.home.dir.prefix = [/user]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.web.authentication.kerberos.keytab = [/etc/security/keytabs/spnego.service.keytab]
2016-12-01 19:13:24,014 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.web.authentication.kerberos.principal = [HTTP/_HOST@HSCALE.COM]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.webhdfs.enabled = [true]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.webhdfs.rest-csrf.browser-useragents-regex = [^Mozilla.*,^Opera.*]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.webhdfs.rest-csrf.custom-header = [X-XSRF-HEADER]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.webhdfs.rest-csrf.enabled = [false]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.webhdfs.rest-csrf.methods-to-ignore = [GET,OPTIONS,HEAD,TRACE]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.webhdfs.ugi.expire.after.access = [600000]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  dfs.webhdfs.user.provider.user.pattern = [^[A-Za-z_][A-Za-z0-9._-]*[$]?$]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fallbackLocalLatency = [1]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fallbackMinimumRowCount = [20]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fallbackNullFraction = [0.1]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fallbackOpencloseLatency = [2000]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fallbackRegionRowCount = [5000000]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fallbackRemoteLatencyRatio = [10]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fallbackRowWidth = [170]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  file.blocksize = [67108864]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  file.bytes-per-checksum = [512]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  file.client-write-packet-size = [65536]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  file.replication = [1]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  file.stream-buffer-size = [4096]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.AbstractFileSystem.file.impl = [org.apache.hadoop.fs.local.LocalFs]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.AbstractFileSystem.ftp.impl = [org.apache.hadoop.fs.ftp.FtpFs]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.AbstractFileSystem.har.impl = [org.apache.hadoop.fs.HarFs]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.AbstractFileSystem.hdfs.impl = [org.apache.hadoop.fs.Hdfs]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.AbstractFileSystem.swebhdfs.impl = [org.apache.hadoop.fs.SWebHdfs]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.AbstractFileSystem.viewfs.impl = [org.apache.hadoop.fs.viewfs.ViewFs]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.AbstractFileSystem.webhdfs.impl = [org.apache.hadoop.fs.WebHdfs]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.automatic.close = [true]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.client.resolve.remote.symlinks = [true]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.defaultFS = [hdfs://hscale-dev1-nn:8020]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.df.interval = [60000]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.du.interval = [600000]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.ftp.host = [0.0.0.0]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.ftp.host.port = [21]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.har.impl.disable.cache = [true]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.permissions.umask-mode = [022]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3.block.size = [67108864]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3.buffer.dir = [/tmp/hadoop-hbase/s3]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3.maxRetries = [4]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3.sleepTimeSeconds = [10]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.attempts.maximum = [10]
2016-12-01 19:13:24,015 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.buffer.dir = [/tmp/hadoop-hbase/s3a]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.connection.establish.timeout = [5000]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.connection.maximum = [15]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.connection.ssl.enabled = [true]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.connection.timeout = [50000]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.fast.buffer.size = [1048576]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.fast.upload = [false]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.impl = [org.apache.hadoop.fs.s3a.S3AFileSystem]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.max.total.tasks = [1000]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.multipart.purge = [false]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.multipart.purge.age = [86400]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.multipart.size = [104857600]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.multipart.threshold = [2147483647]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.paging.maximum = [5000]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.threads.core = [15]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.threads.keepalivetime = [60]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3a.threads.max = [256]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3n.block.size = [67108864]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3n.multipart.copy.block.size = [5368709120]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3n.multipart.uploads.block.size = [67108864]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.s3n.multipart.uploads.enabled = [false]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.swift.impl = [org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.trash.checkpoint.interval = [0]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  fs.trash.interval = [360]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ftp.blocksize = [67108864]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ftp.bytes-per-checksum = [512]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ftp.client-write-packet-size = [65536]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ftp.replication = [3]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ftp.stream-buffer-size = [4096]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ha.failover-controller.active-standby-elector.zk.op.retries = [120]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ha.failover-controller.cli-check.rpc-timeout.ms = [20000]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ha.failover-controller.graceful-fence.connection.retries = [1]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ha.failover-controller.graceful-fence.rpc-timeout.ms = [5000]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ha.failover-controller.new-active.rpc-timeout.ms = [60000]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ha.health-monitor.check-interval.ms = [1000]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ha.health-monitor.connect-retry-interval.ms = [1000]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ha.health-monitor.rpc-timeout.ms = [45000]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ha.health-monitor.sleep-after-disconnect.ms = [1000]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ha.zookeeper.acl = [world:anyone:rwcda]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ha.zookeeper.parent-znode = [/hadoop-ha]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ha.zookeeper.session-timeout.ms = [5000]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.common.configuration.version = [0.23.0]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.fuse.connection.timeout = [300]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.fuse.timer.period = [5]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.hdfs.configuration.version = [1]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.authentication.kerberos.keytab = [/home/hbase/hadoop.keytab]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.authentication.kerberos.principal = [HTTP/_HOST@LOCALHOST]
2016-12-01 19:13:24,016 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.authentication.signature.secret.file = [/home/hbase/hadoop-http-auth-signature-secret]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.authentication.simple.anonymous.allowed = [true]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.authentication.token.validity = [36000]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.authentication.type = [simple]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.cross-origin.allowed-headers = [X-Requested-With,Content-Type,Accept,Origin]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.cross-origin.allowed-methods = [GET,POST,HEAD]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.cross-origin.allowed-origins = [*]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.cross-origin.enabled = [false]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.cross-origin.max-age = [1800]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.filter.initializers = [org.apache.hadoop.http.lib.StaticUserWebFilter,org.apache.hadoop.security.HttpCrossOriginFilterInitializer]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.http.staticuser.user = [dr.who]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.jetty.logs.serve.aliases = [true]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.kerberos.kinit.command = [kinit]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.policy.file = [hbase-policy.xml]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.HTTP.groups = [users]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.HTTP.hosts = [hscale-dev1-dn2]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.hbase.groups = [*]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.hbase.hosts = [*]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.hcat.groups = [*]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.hcat.hosts = [hscale-dev1-dn2]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.hdfs.groups = [*]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.hdfs.hosts = [*]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.hive.groups = [*]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.hive.hosts = [hscale-dev1-dn2]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.oozie.groups = [*]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.oozie.hosts = [hscale-dev1-dn3]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.yarn.groups = [*]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.proxyuser.yarn.hosts = [hscale-dev1-nn]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.registry.jaas.context = [Client]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.registry.rm.enabled = [false]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.registry.secure = [false]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.registry.system.acls = [sasl:yarn@, sasl:mapred@, sasl:hdfs@]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.registry.zk.connection.timeout.ms = [15000]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.registry.zk.quorum = [hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.registry.zk.retry.ceiling.ms = [60000]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.registry.zk.retry.interval.ms = [1000]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.registry.zk.retry.times = [5]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.registry.zk.root = [/registry]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.registry.zk.session.timeout.ms = [60000]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.rpc.protection = [authentication]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.rpc.socket.factory.class.default = [org.apache.hadoop.net.StandardSocketFactory]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.auth_to_local = [RULE:[1:$1@$0](ambari-qa@HSCALE.COM)s/.*/ambari-qa/
RULE:[1:$1@$0](hbase@HSCALE.COM)s/.*/hbase/
RULE:[1:$1@$0](hdfs@HSCALE.COM)s/.*/hdfs/
RULE:[1:$1@$0](spark@HSCALE.COM)s/.*/spark/
RULE:[1:$1@$0](.*@HSCALE.COM)s/@.*//
RULE:[2:$1@$0](amshbase@HSCALE.COM)s/.*/ams/
RULE:[2:$1@$0](amszk@HSCALE.COM)s/.*/ams/
RULE:[2:$1@$0](dn@HSCALE.COM)s/.*/hdfs/
RULE:[2:$1@$0](hbase@HSCALE.COM)s/.*/hbase/
RULE:[2:$1@$0](hive@HSCALE.COM)s/.*/hive/
RULE:[2:$1@$0](jhs@HSCALE.COM)s/.*/mapred/
RULE:[2:$1@$0](nm@HSCALE.COM)s/.*/yarn/
RULE:[2:$1@$0](nn@HSCALE.COM)s/.*/hdfs/
RULE:[2:$1@$0](oozie@HSCALE.COM)s/.*/oozie/
RULE:[2:$1@$0](rm@HSCALE.COM)s/.*/yarn/
RULE:[2:$1@$0](yarn@HSCALE.COM)s/.*/yarn/
DEFAULT]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.authentication = [kerberos]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.authorization = [true]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.crypto.buffer.size = [8192]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.crypto.cipher.suite = [AES/CTR/NoPadding]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.crypto.codec.classes.aes.ctr.nopadding = [org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec,org.apache.hadoop.crypto.JceAesCtrCryptoCodec]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.dns.log-slow-lookups.enabled = [false]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.dns.log-slow-lookups.threshold.ms = [1000]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.group.mapping = [org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback]
2016-12-01 19:13:24,017 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.group.mapping.ldap.directory.search.timeout = [10000]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.group.mapping.ldap.search.attr.group.name = [cn]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.group.mapping.ldap.search.attr.member = [member]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.group.mapping.ldap.search.filter.group = [(objectClass=group)]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.group.mapping.ldap.search.filter.user = [(&(objectClass=user)(sAMAccountName={0}))]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.group.mapping.ldap.ssl = [false]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.groups.cache.secs = [300]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.groups.cache.warn.after.ms = [5000]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.groups.negative-cache.secs = [30]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.instrumentation.requires.admin = [false]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.java.secure.random.algorithm = [SHA1PRNG]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.kms.client.authentication.retry-count = [1]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.kms.client.encrypted.key.cache.expiry = [43200000]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.kms.client.encrypted.key.cache.low-watermark = [0.3f]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.kms.client.encrypted.key.cache.num.refill.threads = [2]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.kms.client.encrypted.key.cache.size = [500]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.random.device.file.path = [/dev/urandom]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.security.uid.cache.secs = [14400]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.shell.safely.delete.limit.num.files = [100]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.ssl.client.conf = [ssl-client.xml]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.ssl.enabled = [false]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.ssl.enabled.protocols = [TLSv1]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.ssl.hostname.verifier = [DEFAULT]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.ssl.keystores.factory.class = [org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.ssl.require.client.cert = [false]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.ssl.server.conf = [ssl-server.xml]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.tmp.dir = [/tmp/hadoop-hbase]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.user.group.static.mapping.overrides = [dr.who=;]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.util.hash.type = [murmur]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hadoop.work.around.non.threadsafe.getpwuid = [false]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.auth.key.update.interval = [86400000]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.auth.token.max.lifetime = [604800000]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.balancer.period = [60000]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.bulkload.retries.number = [10]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.bulkload.staging.dir = [/apps/hbase/staging]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.cells.scanned.per.heartbeat.check = [10000]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.client.ipc.pool.size = [10]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.client.keyvalue.maxsize = [1048576]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.client.localityCheck.threadPoolSize = [2]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.client.max.perregion.tasks = [100]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.client.max.perserver.tasks = [5]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.client.max.total.tasks = [100]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.client.pause = [100]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.client.retries.number = [5]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.client.scanner.caching = [1000]
2016-12-01 19:13:24,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.client.scanner.timeout.period = [60000]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.client.write.buffer = [2097152]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.cluster.distributed = [true]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.column.max.version = [1]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.config.read.zookeeper.config = [false]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.coordinated.state.manager.class = [org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.coprocessor.abortonerror = [true]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.coprocessor.enabled = [true]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.coprocessor.master.classes = [org.apache.hadoop.hbase.security.access.AccessController,com.splicemachine.hbase.SpliceMasterObserver]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.coprocessor.region.classes = [org.apache.hadoop.hbase.security.access.AccessController,org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint,com.splicemachine.hbase.MemstoreAwareObserver,com.splicemachine.derby.hbase.SpliceIndexEndpoint,com.splicemachine.hbase.RegionSizeEndpoint,com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint,com.splicemachine.si.data.hbase.coprocessor.SIObserver,com.splicemachine.hbase.BackupEndpointObserver]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.coprocessor.regionserver.classes = [org.apache.hadoop.hbase.security.access.AccessController,com.splicemachine.hbase.RegionServerLifecycleObserver]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.coprocessor.user.enabled = [true]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.data.umask = [000]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.data.umask.enable = [false]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.defaults.for.version = [1.1.2.2.4.2.0-258]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.defaults.for.version.skip = [true]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.dfs.client.read.shortcircuit.buffer.size = [131072]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.dynamic.jars.dir = [hdfs://hscale-dev1-nn:8020/apps/hbase/data/lib]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.fs.tmp.dir = [/user/hbase/hbase-staging]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hregion.majorcompaction = [604800000]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hregion.majorcompaction.jitter = [0.50]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hregion.max.filesize = [10737418240]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hregion.memstore.block.multiplier = [4]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hregion.memstore.flush.size = [134217728]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hregion.memstore.mslab.enabled = [true]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hregion.percolumnfamilyflush.size.lower.bound = [16777216]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hregion.preclose.flush.size = [5242880]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.blockingStoreFiles = [20]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.blockingWaitTime = [90000]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.bytes.per.checksum = [16384]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.checksum.algorithm = [CRC32]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.compaction.kv.max = [10]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.compaction.max = [10]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.compaction.max.size = [260046848]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.compaction.min = [5]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.compaction.min.size = [16777216]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.compactionThreshold = [3]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.defaultengine.compactionpolicy.class = [com.splicemachine.compactions.SpliceDefaultCompactionPolicy]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.defaultengine.compactor.class = [com.splicemachine.compactions.SpliceDefaultCompactor]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.flusher.count = [2]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.hstore.time.to.purge.deletes = [0]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.htable.threads.max = [96]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.http.filter.initializers = [org.apache.hadoop.hbase.http.lib.StaticUserWebFilter]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.http.max.threads = [10]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.http.staticuser.user = [dr.stack]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.ipc.client.fallback-to-simple-auth-allowed = [false]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.ipc.client.tcpnodelay = [true]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.ipc.server.callqueue.handler.factor = [0.1]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.ipc.server.callqueue.read.ratio = [0]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.ipc.server.callqueue.scan.ratio = [0]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.ipc.warn.response.size = [-1]
2016-12-01 19:13:24,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.ipc.warn.response.time = [-1]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.lease.recovery.dfs.timeout = [64000]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.lease.recovery.timeout = [900000]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.local.dir = [/tmp/hbase-hbase/local]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.catalog.timeout = [600000]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.distributed.log.replay = [false]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.hfilecleaner.plugins = [org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.info.bindAddress = [0.0.0.0]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.info.port = [16010]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.infoserver.redirect = [true]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.kerberos.principal = [hbase/_HOST@HSCALE.COM]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.keytab.file = [/etc/security/keytabs/hbase.service.keytab]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.loadbalance.bytable = [TRUE]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.loadbalancer.class = [org.apache.hadoop.hbase.master.balancer.StochasticLoadBalancer]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.logcleaner.plugins = [org.apache.hadoop.hbase.master.cleaner.TimeToLiveLogCleaner]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.logcleaner.ttl = [600000]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.normalizer.class = [org.apache.hadoop.hbase.master.normalizer.SimpleRegionNormalizer]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.master.port = [16000]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.metrics.exposeOperationTimes = [true]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.metrics.showTableName = [true]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.mvcc.impl = [org.apache.hadoop.hbase.regionserver.SIMultiVersionConsistencyControl]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.normalizer.enabled = [false]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.normalizer.period = [1800000]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.online.schema.update.enable = [true]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.region.replica.replication.enabled = [false]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regions.slop = [0.01]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.catalog.timeout = [600000]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.checksum.verify = [true]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.dns.interface = [default]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.dns.nameserver = [default]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.global.memstore.size = [0.4]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.global.memstore.size.lower.limit = [0.9]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.handler.abort.on.error.percent = [0.5]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.handler.count = [40]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.hlog.reader.impl = [org.apache.hadoop.hbase.regionserver.wal.ProtobufLogReader]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.hlog.writer.impl = [org.apache.hadoop.hbase.regionserver.wal.ProtobufLogWriter]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.info.bindAddress = [0.0.0.0]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.info.port = [16030]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.info.port.auto = [false]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.kerberos.principal = [hbase/_HOST@HSCALE.COM]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.keytab.file = [/etc/security/keytabs/hbase.service.keytab]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.lease.period = [1200000]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.logroll.errors.tolerated = [2]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.logroll.period = [3600000]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.maxlogs = [48]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.msginterval = [3000]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.optionalcacheflushinterval = [3600000]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.port = [16020]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.region.split.policy = [org.apache.hadoop.hbase.regionserver.IncreasingToUpperBoundRegionSplitPolicy]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.regionSplitLimit = [1000]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.storefile.refresh.period = [0]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.thread.compaction.large = [1]
2016-12-01 19:13:24,020 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.thread.compaction.small = [4]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.thrift.compact = [false]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.thrift.framed = [false]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.thrift.framed.max_frame_size_in_mb = [2]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.wal.codec = [org.apache.hadoop.hbase.regionserver.wal.WALCellCodec]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.regionserver.wal.enablecompression = [TRUE]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.replication.rpc.codec = [org.apache.hadoop.hbase.codec.KeyValueCodecWithTags]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.rest.filter.classes = [org.apache.hadoop.hbase.rest.filter.GzipFilter]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.rest.port = [8080]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.rest.readonly = [false]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.rest.support.proxyuser = [false]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.rest.threads.max = [100]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.rest.threads.min = [2]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.rootdir = [hdfs://hscale-dev1-nn:8020/apps/hbase/data]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.rootdir.perms = [700]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.rpc.protection = [authentication]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.rpc.shortoperation.timeout = [10000]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.rpc.timeout = [1200000]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.rs.cacheblocksonwrite = [false]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.security.authentication = [kerberos]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.security.authorization = [true]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.security.exec.permission.checks = [false]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.security.visibility.mutations.checkauths = [false]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.server.compactchecker.interval.multiplier = [1000]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.server.scanner.max.result.size = [104857600]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.server.thread.wakefrequency = [10000]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.server.versionfile.writeattempts = [3]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.snapshot.enabled = [true]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.snapshot.master.timeout.millis = [300000]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.snapshot.region.timeout = [300000]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.snapshot.restore.failsafe.name = [hbase-failsafe-{snapshot.name}-{restore.timestamp}]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.snapshot.restore.take.failsafe.snapshot = [true]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.splitlog.manager.timeout = [3000]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.status.listener.class = [org.apache.hadoop.hbase.client.ClusterStatusListener$MulticastListener]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.status.multicast.address.ip = [226.1.1.3]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.status.multicast.address.port = [16100]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.status.multicast.port = [16100]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.status.published = [false]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.status.publisher.class = [org.apache.hadoop.hbase.master.ClusterStatusPublisher$MulticastPublisher]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.storescanner.parallel.seek.enable = [false]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.storescanner.parallel.seek.threads = [10]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.superuser = [hbase]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.table.lock.enable = [true]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.table.max.rowsize = [1073741824]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.thrift.htablepool.size.max = [1000]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.thrift.maxQueuedRequests = [1000]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.thrift.maxWorkerThreads = [1000]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.thrift.minWorkerThreads = [16]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.tmp.dir = [/tmp/hbase-hbase]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.wal.disruptor.batch = [TRUE]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.wal.provider = [multiwal]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.wal.regiongrouping.numgroups = [16]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.zookeeper.dns.interface = [default]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.zookeeper.dns.nameserver = [default]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.zookeeper.leaderport = [3888]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.zookeeper.peerport = [2888]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.zookeeper.property.clientPort = [2181]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.zookeeper.property.dataDir = [/tmp/hbase-hbase/zookeeper]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.zookeeper.property.initLimit = [10]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.zookeeper.property.maxClientCnxns = [300]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.zookeeper.property.syncLimit = [5]
2016-12-01 19:13:24,021 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.zookeeper.property.tickTime = [6000]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.zookeeper.quorum = [hscale-dev1-dn1,hscale-dev1-dn3,hscale-dev1-dn2,hscale-dev1-dn4]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbase.zookeeper.useMulti = [true]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbaseSecurityAuthentication = [false]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hbaseSecurityAuthorization = [simple]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hfile.block.bloom.cacheonwrite = [TRUE]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hfile.block.cache.size = [0.40]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hfile.block.index.cacheonwrite = [false]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hfile.format.version = [3]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  hfile.index.block.max.size = [131072]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ignoreSavePoints = [false]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  importMaxQuotedColumnLines = [50000]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  indexBatchSize = [4000]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  indexFetchSampleSize = [128]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  indexLookupBlocks = [5]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.compression.codec.bzip2.library = [system-native]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.compression.codecs = [org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.file.buffer.size = [131072]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.map.index.interval = [128]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.map.index.skip = [0]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.mapfile.bloom.error.rate = [0.005]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.mapfile.bloom.size = [1048576]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.native.lib.available = [true]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.seqfile.compress.blocksize = [1000000]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.seqfile.lazydecompress = [true]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.seqfile.local.dir = [/tmp/hadoop-hbase/io/local]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.seqfile.sorter.recordlimit = [1000000]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.serializations = [org.apache.hadoop.io.serializer.WritableSerialization]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.skip.checksum.errors = [false]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.storefile.bloom.block.size = [131072]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  io.storefile.bloom.error.rate = [0.005]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipc.client.connect.max.retries = [50]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipc.client.connect.max.retries.on.timeouts = [45]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipc.client.connect.retry.interval = [1000]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipc.client.connect.timeout = [20000]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipc.client.connection.maxidletime = [30000]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipc.client.fallback-to-simple-auth-allowed = [false]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipc.client.idlethreshold = [8000]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipc.client.kill.max = [10]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipc.server.listen.queue.size = [128]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipc.server.log.slow.rpc = [false]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipc.server.max.connections = [0]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipc.server.tcpnodelay = [true]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  ipcThreads = [40]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  kryoPoolSize = [1100]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  map.sort.class = [org.apache.hadoop.util.QuickSort]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapred.child.java.opts = [-Xmx200m]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.admin.map.child.java.opts = [-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.4.2.0-258]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.admin.reduce.child.java.opts = [-server -XX:NewRatio=8 -Djava.net.preferIPv4Stack=true -Dhdp.version=2.4.2.0-258]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.admin.user.env = [LD_LIBRARY_PATH=/usr/hdp/2.4.2.0-258/hadoop/lib/native:/usr/hdp/2.4.2.0-258/hadoop/lib/native/Linux-amd64-64]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.am.max-attempts = [2]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.app-submission.cross-platform = [false]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.application.classpath = [$PWD/mr-framework/hadoop/share/hadoop/mapreduce/*:$PWD/mr-framework/hadoop/share/hadoop/mapreduce/lib/*:$PWD/mr-framework/hadoop/share/hadoop/common/*:$PWD/mr-framework/hadoop/share/hadoop/common/lib/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/*:$PWD/mr-framework/hadoop/share/hadoop/yarn/lib/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/*:$PWD/mr-framework/hadoop/share/hadoop/hdfs/lib/*:$PWD/mr-framework/hadoop/share/hadoop/tools/lib/*:/usr/hdp/2.4.2.0-258/hadoop/lib/hadoop-lzo-0.6.0.2.4.2.0-258.jar:/etc/hadoop/conf/secure]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.application.framework.path = [/hdp/apps/2.4.2.0-258/mapreduce/mapreduce.tar.gz#mr-framework]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.client.completion.pollinterval = [5000]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.client.output.filter = [FAILED]
2016-12-01 19:13:24,022 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.client.progressmonitor.pollinterval = [1000]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.client.submit.file.replication = [10]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.cluster.acls.enabled = [false]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.cluster.administrators = [ hadoop]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.cluster.local.dir = [/tmp/hadoop-hbase/mapred/local]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.cluster.temp.dir = [/tmp/hadoop-hbase/mapred/temp]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.fileoutputcommitter.algorithm.version = [1]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.framework.name = [yarn]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.ifile.readahead = [true]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.ifile.readahead.bytes = [4194304]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.input.fileinputformat.list-status.num-threads = [1]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.input.fileinputformat.split.minsize = [0]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.input.lineinputformat.linespermap = [1]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.acl-modify-job = [ ]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.acl-view-job = [ ]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.classloader = [false]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.committer.setup.cleanup.needed = [true]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.complete.cancel.delegation.tokens = [true]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.counters.max = [130]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.emit-timeline-data = [false]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.end-notification.max.attempts = [5]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.end-notification.max.retry.interval = [5000]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.end-notification.retry.attempts = [0]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.end-notification.retry.interval = [1000]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.hdfs-servers = [hdfs://hscale-dev1-nn:8020]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.jvm.numtasks = [1]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.map.output.collector.class = [org.apache.hadoop.mapred.MapTask$MapOutputBuffer]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.maps = [2]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.max.split.locations = [10]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.maxtaskfailures.per.tracker = [3]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.queuename = [default]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.reduce.shuffle.consumer.plugin.class = [org.apache.hadoop.mapreduce.task.reduce.Shuffle]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.reduce.slowstart.completedmaps = [0.05]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.reducer.preempt.delay.sec = [0]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.reducer.unconditional-preempt.delay.sec = [300]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.reduces = [1]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.running.map.limit = [0]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.running.reduce.limit = [0]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.speculative.minimum-allowed-tasks = [10]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.speculative.retry-after-no-speculate = [1000]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.speculative.retry-after-speculate = [15000]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.speculative.slowtaskthreshold = [1.0]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.speculative.speculative-cap-running-tasks = [0.1]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.speculative.speculative-cap-total-tasks = [0.01]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.split.metainfo.maxsize = [10000000]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.token.tracking.ids.enabled = [false]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.ubertask.enable = [false]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.ubertask.maxmaps = [9]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.ubertask.maxreduces = [1]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.job.userlog.retain.hours = [24]
2016-12-01 19:13:24,023 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.address = [hscale-dev1-nn:10020]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.admin.acl = [*]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.admin.address = [0.0.0.0:10033]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.bind-host = [0.0.0.0]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.cleaner.enable = [true]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.cleaner.interval-ms = [86400000]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.client.thread-count = [10]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.datestring.cache.size = [200000]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.done-dir = [/mr-history/done]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.http.policy = [HTTP_ONLY]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.intermediate-done-dir = [/mr-history/tmp]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.joblist.cache.size = [20000]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.keytab = [/etc/security/keytabs/jhs.service.keytab]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.loadedjobs.cache.size = [5]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.max-age-ms = [604800000]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.minicluster.fixed.ports = [false]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.move.interval-ms = [180000]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.move.thread-count = [3]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.principal = [jhs/_HOST@HSCALE.COM]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.recovery.enable = [true]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.recovery.store.class = [org.apache.hadoop.mapreduce.v2.hs.HistoryServerLeveldbStateStoreService]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.recovery.store.fs.uri = [/tmp/hadoop-hbase/mapred/history/recoverystore]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.recovery.store.leveldb.path = [/hadoop/mapreduce/jhs]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.webapp.address = [hscale-dev1-nn:19888]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.webapp.rest-csrf.custom-header = [X-XSRF-Header]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.webapp.rest-csrf.enabled = [false]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.webapp.rest-csrf.methods-to-ignore = [GET,OPTIONS,HEAD]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.webapp.spnego-keytab-file = [/etc/security/keytabs/spnego.service.keytab]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobhistory.webapp.spnego-principal = [HTTP/_HOST@HSCALE.COM]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.address = [local]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.expire.trackers.interval = [600000]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.handler.count = [10]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.heartbeats.in.second = [100]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.http.address = [0.0.0.0:50030]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.instrumentation = [org.apache.hadoop.mapred.JobTrackerMetricsInst]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.jobhistory.block.size = [3145728]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.jobhistory.lru.cache.size = [5]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.jobhistory.task.numberprogresssplits = [12]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.maxtasks.perjob = [-1]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.persist.jobstatus.active = [true]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.persist.jobstatus.dir = [/jobtracker/jobsInfo]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.persist.jobstatus.hours = [1]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.restart.recover = [false]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.retiredjobs.cache.size = [1000]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.staging.root.dir = [/tmp/hadoop-hbase/mapred/staging]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.system.dir = [/tmp/hadoop-hbase/mapred/system]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.taskcache.levels = [2]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.taskscheduler = [org.apache.hadoop.mapred.JobQueueTaskScheduler]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.tasktracker.maxblacklists = [4]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.jobtracker.webinterface.trusted = [false]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.local.clientfactory.class.name = [org.apache.hadoop.mapred.LocalClientFactory]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.map.cpu.vcores = [1]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.map.java.opts = [-Xmx1228m]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.map.log.level = [INFO]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.map.maxattempts = [4]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.map.memory.mb = [1536]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.map.output.compress = [false]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.map.output.compress.codec = [org.apache.hadoop.io.compress.DefaultCodec]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.map.skip.maxrecords = [0]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.map.skip.proc.count.autoincr = [true]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.map.sort.spill.percent = [0.7]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.map.speculative = [false]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.output.fileoutputformat.compress = [false]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.output.fileoutputformat.compress.codec = [org.apache.hadoop.io.compress.DefaultCodec]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.output.fileoutputformat.compress.type = [BLOCK]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.cpu.vcores = [1]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.input.buffer.percent = [0.0]
2016-12-01 19:13:24,024 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.java.opts = [-Xmx1228m]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.log.level = [INFO]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.markreset.buffer.percent = [0.0]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.maxattempts = [4]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.memory.mb = [1536]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.merge.inmem.threshold = [1000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.shuffle.connect.timeout = [180000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.shuffle.fetch.retry.enabled = [1]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.shuffle.fetch.retry.interval-ms = [1000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.shuffle.fetch.retry.timeout-ms = [30000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.shuffle.input.buffer.percent = [0.7]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.shuffle.memory.limit.percent = [0.25]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.shuffle.merge.percent = [0.66]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.shuffle.parallelcopies = [30]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.shuffle.read.timeout = [180000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.shuffle.retry-delay.max.ms = [60000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.skip.maxgroups = [0]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.skip.proc.count.autoincr = [true]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.reduce.speculative = [false]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.shuffle.connection-keep-alive.enable = [false]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.shuffle.connection-keep-alive.timeout = [5]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.shuffle.max.connections = [0]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.shuffle.max.threads = [0]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.shuffle.port = [13562]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.shuffle.ssl.enabled = [false]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.shuffle.ssl.file.buffer.size = [65536]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.shuffle.transfer.buffer.size = [131072]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.combine.progress.records = [10000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.files.preserve.failedtasks = [false]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.io.sort.factor = [100]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.io.sort.mb = [859]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.merge.progress.records = [10000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.profile = [false]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.profile.map.params = [-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.profile.maps = [0-2]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.profile.params = [-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.profile.reduce.params = [-agentlib:hprof=cpu=samples,heap=sites,force=n,thread=y,verbose=n,file=%s]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.profile.reduces = [0-2]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.skip.start.attempts = [2]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.timeout = [300000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.task.userlog.limit.kb = [0]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.dns.interface = [default]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.dns.nameserver = [default]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.healthchecker.interval = [60000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.healthchecker.script.timeout = [600000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.http.address = [0.0.0.0:50060]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.http.threads = [40]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.indexcache.mb = [10]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.instrumentation = [org.apache.hadoop.mapred.TaskTrackerMetricsInst]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.local.dir.minspacekill = [0]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.local.dir.minspacestart = [0]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.map.tasks.maximum = [2]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.outofband.heartbeat = [false]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.reduce.tasks.maximum = [2]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.report.address = [127.0.0.1:0]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.taskcontroller = [org.apache.hadoop.mapred.DefaultTaskController]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.taskmemorymanager.monitoringinterval = [5000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  mapreduce.tasktracker.tasks.sleeptimebeforesigkill = [5000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  maxBufferEntries = [1000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  maxBufferHeapSize = [3145728]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  maxDdlWait = [60000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  maxDependentWrites = [60000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  maxIndependentWrites = [60000]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  maxRetries = [5]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  maxWriterThreads = [5]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  namespace = [splice]
2016-12-01 19:13:24,025 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  nestedLoopJoinBatchSize = [10]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  net.topology.impl = [org.apache.hadoop.net.NetworkTopology]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  net.topology.node.switch.mapping.impl = [org.apache.hadoop.net.ScriptBasedMapping]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  net.topology.script.file.name = [/etc/hadoop/conf/topology_script.py]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  net.topology.script.number.args = [100]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  networkBindAddress = [0.0.0.0]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  networkBindPort = [1527]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  nfs.allow.insecure.ports = [true]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  nfs.dump.dir = [/tmp/.hdfs-nfs]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  nfs.exports.allowed.hosts = [* rw]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  nfs.file.dump.dir = [/tmp/.hdfs-nfs]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  nfs.mountd.port = [4242]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  nfs.rtmax = [1048576]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  nfs.server.port = [2049]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  nfs.wtmax = [1048576]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  olapClientTickTime = [1000]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  olapClientWaitTime = [900000]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  olapServerBindPort = [60014]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  olapServerThreads = [16]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  olapServerTickLimit = [120]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  optimizerExtraQualifierMultiplier = [0.9]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  optimizerPlanMaximumTimeout = [9223372036854775807]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  optimizerPlanMinimumTimeout = [0]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  partitionCacheExpiration = [60000]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  partitionserverJmxPort = [10102]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  partitionserverPort = [16020]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  phoenix.connection.autoCommit = [true]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  phoenix.functions.allowUserDefinedFunctions = [ ]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  phoenix.query.timeoutMs = [60000]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  pipelineKryoPoolSize = [1024]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  readResolverQueueSize = [-1]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  readResolverThreads = [4]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  regionLoadUpdateInterval = [5]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  regionMaxFileSize = [10737418240]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  regionServerHandlerCount = [40]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  reservedSlotsTimeout = [60]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  rpc.metrics.quantile.enable = [false]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  s3.blocksize = [67108864]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  s3.bytes-per-checksum = [512]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  s3.client-write-packet-size = [65536]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  s3.replication = [3]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  s3.stream-buffer-size = [4096]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  s3native.blocksize = [67108864]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  s3native.bytes-per-checksum = [512]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  s3native.client-write-packet-size = [65536]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  s3native.replication = [3]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  s3native.stream-buffer-size = [4096]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  sequenceBlockSize = [1000]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  sparkIoCompressionCodec = [lz4]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  sparkResultStreamingBatchSize = [1024]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  sparkResultStreamingBatches = [10]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.authentication = [NATIVE]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.authentication.native.algorithm = [SHA-512]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.authentication.native.create.credentials.database = [true]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.client.numConnections = [1]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.client.write.maxDependentWrites = [60000]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.client.write.maxIndependentWrites = [60000]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.compression = [snappy]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.marshal.kryoPoolSize = [1100]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.olap_server.clientWaitTime = [900000]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.ring.bufferSize = [131072]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.splitBlockSize = [67108864]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.timestamp_server.clientWaitTime = [120000]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.txn.activeTxns.cacheSize = [10240]
2016-12-01 19:13:24,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.txn.completedTxns.concurrency = [128]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splice.txn.concurrencyLevel = [4096]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  spliceRootPath = [/splice]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  splitBlockSize = [67108864]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  startupLockWaitPeriod = [1000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  storageFactoryHome = [hdfs://hscale-dev1-nn:8020/apps/hbase/data]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  tableSplitSleepInterval = [500]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  tfile.fs.input.buffer.size = [262144]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  tfile.fs.output.buffer.size = [262144]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  tfile.io.chunk.size = [1048576]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  threadKeepaliveTime = [60]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  timestampBlockSize = [8192]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  timestampClientWaitTime = [120000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  timestampServerBindPort = [60012]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  topkSize = [10]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  transactionKeepAliveInterval = [15000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  transactionKeepAliveThreads = [4]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  transactionLockStripes = [40]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  transactionTimeout = [150000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  upgradeForced = [false]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  upgradeForcedFrom = [null]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  writeMaxFlushesPerRegion = [5]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.acl.enable = [true]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.admin.acl = [yarn,dr.who]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.am.liveness-monitor.expiry-interval-ms = [600000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.admin-command-opts = [-Dhdp.version=2.4.2.0-258]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.command-opts = [-Xmx410m]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.container.log.backups = [0]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.container.log.limit.kb = [0]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.containerlauncher.threadpool-initial-size = [10]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.hard-kill-timeout-ms = [10000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.job.committer.cancel-timeout = [60000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.job.committer.commit-window = [10000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.job.task.listener.thread-count = [30]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.log.level = [INFO]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.resource.cpu-vcores = [1]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.resource.mb = [512]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms = [1000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.am.staging-dir = [/user]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.client-am.ipc.max-retries = [3]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.client-am.ipc.max-retries-on-timeouts = [3]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.client.job.max-retries = [0]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.client.job.retry-interval = [2000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.client.max-retries = [3]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.shuffle.log.backups = [0]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.shuffle.log.limit.kb = [0]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.shuffle.log.separate = [true]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.app.mapreduce.task.container.log.backups = [0]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.application.classpath = [$HADOOP_CONF_DIR, /usr/hdp/current/hadoop-client/*, /usr/hdp/current/hadoop-client/lib/*, /usr/hdp/current/hadoop-hdfs-client/*, /usr/hdp/current/hadoop-hdfs-client/lib/*, /usr/hdp/current/hadoop-yarn-client/*, /usr/hdp/current/hadoop-yarn-client/lib/*, /usr/hdp/current/hadoop-mapreduce-client/*, /usr/hdp/current/hadoop-mapreduce-client/lib/*, /usr/hdp/current/hbase-regionserver/*, /usr/hdp/current/hbase-regionserver/lib/*, /opt/splice/default/lib/*]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.client.application-client-protocol.poll-interval-ms = [200]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.client.failover-proxy-provider = [org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.client.failover-retries = [0]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.client.failover-retries-on-socket-timeouts = [0]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.client.max-cached-nodemanagers-proxies = [0]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.client.nodemanager-client-async.thread-pool-max-size = [500]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.client.nodemanager-connect.max-wait-ms = [60000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.client.nodemanager-connect.retry-interval-ms = [10000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.dispatcher.drain-events.timeout = [300000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.fail-fast = [false]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.http.policy = [HTTP_ONLY]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.ipc.rpc.class = [org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.log-aggregation-enable = [true]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.log-aggregation.retain-check-interval-seconds = [-1]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.log-aggregation.retain-seconds = [2592000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.log.server.url = [http://hscale-dev1-nn:19888/jobhistory/logs]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nm.liveness-monitor.expiry-interval-ms = [600000]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.node-labels.enabled = [false]
2016-12-01 19:13:24,027 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.node-labels.fs-store.impl.class = [org.apache.hadoop.yarn.nodelabels.FileSystemNodeLabelsStore]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.node-labels.fs-store.retry-policy-spec = [2000, 500]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.node-labels.fs-store.root-dir = [/system/yarn/node-labels]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.address = [0.0.0.0:45454]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.admin-env = [MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.aux-services = [mapreduce_shuffle,spark_shuffle]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.aux-services.mapreduce_shuffle.class = [org.apache.hadoop.mapred.ShuffleHandler]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.aux-services.spark_shuffle.class = [org.apache.spark.network.yarn.YarnShuffleService]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.bind-host = [0.0.0.0]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.container-executor.class = [org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.container-manager.thread-count = [20]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.container-metrics.unregister-delay-ms = [10000]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.container-monitor.interval-ms = [3000]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.container-monitor.procfs-tree.smaps-based-rss.enabled = [false]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.delete.debug-delay-sec = [86400]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.delete.thread-count = [4]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.disk-health-checker.interval-ms = [120000]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage = [90]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.disk-health-checker.min-free-space-per-disk-mb = [1000]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.disk-health-checker.min-healthy-disks = [0.25]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.docker-container-executor.exec-name = [/usr/bin/docker]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.env-whitelist = [JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,HADOOP_YARN_HOME]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.health-checker.interval-ms = [135000]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.health-checker.script.timeout-ms = [60000]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.hostname = [0.0.0.0]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.keytab = [/etc/security/keytabs/nm.service.keytab]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.linux-container-executor.cgroups.hierarchy = [hadoop-yarn]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.linux-container-executor.cgroups.mount = [false]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.linux-container-executor.cgroups.strict-resource-usage = [false]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.linux-container-executor.group = [hadoop]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.linux-container-executor.nonsecure-mode.limit-users = [true]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.linux-container-executor.nonsecure-mode.local-user = [nobody]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.linux-container-executor.nonsecure-mode.user-pattern = [^[_.A-Za-z0-9][-@_.A-Za-z0-9]{0,255}?[$]?$]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.linux-container-executor.resources-handler.class = [org.apache.hadoop.yarn.server.nodemanager.util.DefaultLCEResourcesHandler]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.local-cache.max-files-per-directory = [8192]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.local-dirs = [/hadoop/yarn/local]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.localizer.address = [0.0.0.0:8040]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.localizer.cache.cleanup.interval-ms = [600000]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.localizer.cache.target-size-mb = [10240]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.localizer.client.thread-count = [5]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.localizer.fetch.thread-count = [4]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.log-aggregation.compression-type = [gz]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.log-aggregation.debug-enabled = [false]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.log-aggregation.num-log-files-per-app = [30]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.log-aggregation.roll-monitoring-interval-seconds = [-1]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.log-container-debug-info.enabled = [true]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.log-dirs = [/hadoop/yarn/log]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.log.retain-second = [604800]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.log.retain-seconds = [10800]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.pmem-check-enabled = [true]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.principal = [nm/_HOST@HSCALE.COM]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.process-kill-wait.ms = [2000]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.recovery.dir = [/var/log/hadoop-yarn/nodemanager/recovery-state]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.recovery.enabled = [true]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.remote-app-log-dir = [/app-logs]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.remote-app-log-dir-suffix = [logs]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.resource.cpu-vcores = [8]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.resource.memory-mb = [5120]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.resource.percentage-physical-cpu-limit = [80]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.resourcemanager.minimum.version = [NONE]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.sleep-delay-before-sigkill.ms = [250]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.vmem-check-enabled = [false]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.vmem-pmem-ratio = [2.1]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.webapp.address = [0.0.0.0:8042]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.webapp.cross-origin.enabled = [false]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.webapp.rest-csrf.custom-header = [X-XSRF-Header]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.webapp.rest-csrf.enabled = [false]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.webapp.rest-csrf.methods-to-ignore = [GET,OPTIONS,HEAD]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.webapp.spnego-keytab-file = [/etc/security/keytabs/spnego.service.keytab]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.webapp.spnego-principal = [HTTP/_HOST@HSCALE.COM]
2016-12-01 19:13:24,028 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.windows-container.cpu-limit.enabled = [false]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.nodemanager.windows-container.memory-limit.enabled = [false]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.address = [hscale-dev1-nn:8050]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.admin.address = [hscale-dev1-nn:8141]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.admin.client.thread-count = [1]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.am-rm-tokens.master-key-rolling-interval-secs = [86400]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.am.max-attempts = [2]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.amlauncher.thread-count = [50]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.bind-host = [0.0.0.0]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.client.thread-count = [50]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.configuration.provider-class = [org.apache.hadoop.yarn.LocalConfigurationProvider]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.connect.max-wait.ms = [900000]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.connect.retry-interval.ms = [30000]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs = [86400]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.container.liveness-monitor.interval-ms = [600000]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.delayed.delegation-token.removal-interval-ms = [30000]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.fail-fast = [false]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.fs.state-store.num-retries = [0]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.fs.state-store.retry-interval-ms = [1000]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.fs.state-store.retry-policy-spec = [2000, 500]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.fs.state-store.uri = [ ]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.ha.automatic-failover.embedded = [true]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.ha.automatic-failover.enabled = [true]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.ha.automatic-failover.zk-base-path = [/yarn-leader-election]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.ha.enabled = [false]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.hostname = [hscale-dev1-nn]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.keytab = [/etc/security/keytabs/rm.service.keytab]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.leveldb-state-store.path = [/tmp/hadoop-hbase/yarn/system/rmstore]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.max-completed-applications = [10000]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.max-log-aggregation-diagnostics-in-memory = [10]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.nodemanager-connect-retries = [10]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.nodemanager.minimum.version = [NONE]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.nodemanagers.heartbeat-interval-ms = [1000]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.nodes.exclude-path = [/etc/hadoop/conf/yarn.exclude]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.principal = [rm/_HOST@HSCALE.COM]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.proxy-user-privileges.enabled = [true]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.recovery.enabled = [true]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.resource-tracker.address = [hscale-dev1-nn:8025]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.resource-tracker.client.thread-count = [50]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.scheduler.address = [hscale-dev1-nn:8030]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.scheduler.class = [org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.scheduler.client.thread-count = [50]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.scheduler.monitor.enable = [false]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.scheduler.monitor.policies = [org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.state-store.max-completed-applications = [10000]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.store.class = [org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.system-metrics-publisher.dispatcher.pool-size = [10]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.system-metrics-publisher.enabled = [true]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.webapp.address = [hscale-dev1-nn:8088]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.webapp.cross-origin.enabled = [true]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.webapp.delegation-token-auth-filter.enabled = [false]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.webapp.https.address = [hscale-dev1-nn:8090]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.webapp.rest-csrf.custom-header = [X-XSRF-Header]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.webapp.rest-csrf.enabled = [false]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.webapp.rest-csrf.methods-to-ignore = [GET,OPTIONS,HEAD]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.webapp.spnego-keytab-file = [/etc/security/keytabs/spnego.service.keytab]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.webapp.spnego-principal = [HTTP/_HOST@HSCALE.COM]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.work-preserving-recovery.enabled = [true]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms = [10000]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.zk-acl = [world:anyone:rwcda]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.zk-address = [hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.zk-num-retries = [1000]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.zk-retry-interval-ms = [1000]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.zk-state-store.parent-path = [/rmstore]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.resourcemanager.zk-timeout-ms = [10000]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.scheduler.maximum-allocation-mb = [5120]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.scheduler.maximum-allocation-vcores = [8]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.scheduler.minimum-allocation-mb = [512]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.scheduler.minimum-allocation-vcores = [1]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.admin.address = [0.0.0.0:8047]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.admin.thread-count = [1]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.app-checker.class = [org.apache.hadoop.yarn.server.sharedcachemanager.RemoteAppChecker]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.checksum.algo.impl = [org.apache.hadoop.yarn.sharedcache.ChecksumSHA256Impl]
2016-12-01 19:13:24,029 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.cleaner.initial-delay-mins = [10]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.cleaner.period-mins = [1440]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.cleaner.resource-sleep-ms = [0]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.client-server.address = [0.0.0.0:8045]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.client-server.thread-count = [50]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.enabled = [false]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.nested-level = [3]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.nm.uploader.replication.factor = [10]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.nm.uploader.thread-count = [20]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.root-dir = [/sharedcache]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.store.class = [org.apache.hadoop.yarn.server.sharedcachemanager.store.InMemorySCMStore]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.store.in-memory.check-period-mins = [720]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.store.in-memory.initial-delay-mins = [10]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.store.in-memory.staleness-period-mins = [10080]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.uploader.server.address = [0.0.0.0:8046]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.uploader.server.thread-count = [50]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.sharedcache.webapp.address = [0.0.0.0:8788]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.address = [hscale-dev1-nn:10200]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.bind-host = [0.0.0.0]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.client.best-effort = [false]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.client.max-retries = [30]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.client.retry-interval-ms = [1000]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.enabled = [true]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.entity-group-fs-store.active-dir = [/ats/active/]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.entity-group-fs-store.cleaner-interval-seconds = [3600]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.entity-group-fs-store.done-dir = [/ats/done/]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.entity-group-fs-store.retain-seconds = [604800]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.entity-group-fs-store.scan-interval-seconds = [60]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.entity-group-fs-store.summary-store = [org.apache.hadoop.yarn.server.timeline.RollingLevelDBTimelineStore]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.generic-application-history.max-applications = [10000]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.generic-application-history.save-non-am-container-meta-info = [false]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.generic-application-history.store-class = [org.apache.hadoop.yarn.server.applicationhistoryservice.NullApplicationHistoryStore]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.handler-thread-count = [10]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.hostname = [0.0.0.0]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.http-authentication.kerberos.keytab = [/etc/security/keytabs/spnego.service.keytab]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.http-authentication.kerberos.principal = [HTTP/_HOST@HSCALE.COM]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.http-authentication.simple.anonymous.allowed = [true]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.http-authentication.type = [kerberos]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.keytab = [/etc/security/keytabs/yarn.service.keytab]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.leveldb-state-store.path = [/hadoop/yarn/timeline]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.leveldb-timeline-store.path = [/hadoop/yarn/timeline]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.leveldb-timeline-store.read-cache-size = [104857600]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.leveldb-timeline-store.start-time-read-cache-size = [10000]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.leveldb-timeline-store.start-time-write-cache-size = [10000]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.leveldb-timeline-store.ttl-interval-ms = [300000]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.principal = [yarn/_HOST@HSCALE.COM]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.recovery.enabled = [true]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.state-store-class = [org.apache.hadoop.yarn.server.timeline.recovery.LeveldbTimelineStateStore]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.store-class = [org.apache.hadoop.yarn.server.timeline.EntityGroupFSTimelineStore]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.ttl-enable = [true]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.ttl-ms = [2678400000]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.version = [1.5]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.webapp.address = [hscale-dev1-nn:8188]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.webapp.https.address = [hscale-dev1-nn:8190]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.webapp.rest-csrf.custom-header = [X-XSRF-Header]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.webapp.rest-csrf.enabled = [false]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  yarn.timeline-service.webapp.rest-csrf.methods-to-ignore = [GET,OPTIONS,HEAD]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  zookeeper.session.timeout = [120000]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  zookeeper.znode.acl.parent = [acl]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  zookeeper.znode.parent = [/hbase-secure]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] splice.config:  zookeeper.znode.rootserver = [root-region-server]
2016-12-01 19:13:24,030 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] zookeeper.RecoverableZooKeeper: Process identifier=spliceconnection connecting to ZooKeeper ensemble=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181
2016-12-01 19:13:24,031 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] zookeeper.ZooKeeper: Initiating client connection, connectString=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181 sessionTimeout=120000 watcher=spliceconnection0x0, quorum=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181, baseZNode=/hbase-secure
2016-12-01 19:13:24,037 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-SendThread(hscale-dev1-dn1:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-12-01 19:13:24,043 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-SendThread(hscale-dev1-dn1:2181)] zookeeper.ClientCnxn: Opening socket connection to server hscale-dev1-dn1/10.60.70.11:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-12-01 19:13:24,044 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-SendThread(hscale-dev1-dn1:2181)] zookeeper.ClientCnxn: Socket connection established to hscale-dev1-dn1/10.60.70.11:2181, initiating session
2016-12-01 19:13:24,049 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] hbase.ZkTimestampSource: Creating the TimestampClient...
2016-12-01 19:13:24,051 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x3f7d5269 connecting to ZooKeeper ensemble=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181
2016-12-01 19:13:24,051 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] zookeeper.ZooKeeper: Initiating client connection, connectString=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181 sessionTimeout=120000 watcher=hconnection-0x3f7d52690x0, quorum=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181, baseZNode=/hbase-secure
2016-12-01 19:13:24,052 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-SendThread(hscale-dev1-dn2:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-12-01 19:13:24,052 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Opening socket connection to server hscale-dev1-dn2/10.60.70.12:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-12-01 19:13:24,053 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Socket connection established to hscale-dev1-dn2/10.60.70.12:2181, initiating session
2016-12-01 19:13:24,057 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-SendThread(hscale-dev1-dn1:2181)] zookeeper.ClientCnxn: Session establishment complete on server hscale-dev1-dn1/10.60.70.11:2181, sessionid = 0x158ba9a257c0014, negotiated timeout = 120000
2016-12-01 19:13:24,071 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Session establishment complete on server hscale-dev1-dn2/10.60.70.12:2181, sessionid = 0x258ba9a256f0010, negotiated timeout = 120000
2016-12-01 19:13:24,131 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] impl.TimestampClient: TimestampClient on region server successfully registered with JMX
2016-12-01 19:13:24,534 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionServerLifecycleObserver was loaded successfully with priority (536870912).
2016-12-01 19:13:24,535 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] regionserver.HRegionServer: reportForDuty to master=hscale-dev1-nn,16000,1480599517694 with port=16020, startcode=1480599802236
2016-12-01 19:13:24,583 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] hfile.CacheConfig: Allocating LruBlockCache size=1.20 GB, blockSize=64 KB
2016-12-01 19:13:24,591 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=1322544, freeSize=1287167696, maxSize=1288490240, heapSize=1322544, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:13:24,692 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.BoundedRegionGroupingProvider
2016-12-01 19:13:24,692 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.RegionGroupingProvider: Instantiating RegionGroupingStrategy of type class org.apache.hadoop.hbase.wal.RegionGroupingProvider$IdentityGroupingStrategy
2016-12-01 19:13:24,693 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:24,831 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null0, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,050 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 143 ms, current pipeline: []
2016-12-01 19:13:25,051 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null0.1480599804831
2016-12-01 19:13:25,064 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,070 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null1, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,127 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 39 ms, current pipeline: []
2016-12-01 19:13:25,127 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null1.1480599805070
2016-12-01 19:13:25,128 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,133 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null2, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,190 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 41 ms, current pipeline: []
2016-12-01 19:13:25,190 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null2.1480599805133
2016-12-01 19:13:25,192 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,197 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null3, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,252 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 41 ms, current pipeline: []
2016-12-01 19:13:25,253 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null3.1480599805197
2016-12-01 19:13:25,258 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,266 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null4, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,326 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 44 ms, current pipeline: []
2016-12-01 19:13:25,326 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null4.1480599805266
2016-12-01 19:13:25,329 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,339 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null5, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,398 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 41 ms, current pipeline: []
2016-12-01 19:13:25,398 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null5.1480599805339
2016-12-01 19:13:25,399 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,403 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null6, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,462 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 42 ms, current pipeline: []
2016-12-01 19:13:25,462 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null6.1480599805403
2016-12-01 19:13:25,463 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,471 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null7, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,552 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 57 ms, current pipeline: []
2016-12-01 19:13:25,552 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null7.1480599805471
2016-12-01 19:13:25,554 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,564 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null8, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,632 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 50 ms, current pipeline: []
2016-12-01 19:13:25,632 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null8.1480599805564
2016-12-01 19:13:25,633 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,639 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null9, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,705 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 50 ms, current pipeline: []
2016-12-01 19:13:25,705 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null9.1480599805639
2016-12-01 19:13:25,706 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,715 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null10, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,776 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 42 ms, current pipeline: []
2016-12-01 19:13:25,776 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null10.1480599805715
2016-12-01 19:13:25,777 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,786 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null11, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,858 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 50 ms, current pipeline: []
2016-12-01 19:13:25,858 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null11.1480599805786
2016-12-01 19:13:25,859 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,869 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null12, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:25,929 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 42 ms, current pipeline: []
2016-12-01 19:13:25,930 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null12.1480599805869
2016-12-01 19:13:25,931 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:25,946 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null13, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:26,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 49 ms, current pipeline: []
2016-12-01 19:13:26,018 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null13.1480599805946
2016-12-01 19:13:26,019 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:26,026 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null14, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:26,091 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 50 ms, current pipeline: []
2016-12-01 19:13:26,091 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null14.1480599806026
2016-12-01 19:13:26,092 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:13:26,103 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236.null15, suffix=, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:13:26,163 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: Slow sync cost: 41 ms, current pipeline: []
2016-12-01 19:13:26,163 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236.null15.1480599806103
2016-12-01 19:13:26,164 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] wal.BoundedRegionGroupingProvider: Configured to run with 16 delegate WAL providers.
2016-12-01 19:13:26,175 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] regionserver.MetricsRegionServerWrapperImpl: Computing regionserver metrics every 5000 milliseconds
2016-12-01 19:13:26,190 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] regionserver.ReplicationSourceManager: Current list of replicators: [hscale-dev1-dn1,16020,1480599802236] other RSs: [hscale-dev1-dn1,16020,1480599802236]
2016-12-01 19:13:26,235 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] regionserver.SplitLogWorker: SplitLogWorker hscale-dev1-dn1,16020,1480599802236 starting
2016-12-01 19:13:26,235 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] regionserver.HeapMemoryManager: Starting HeapMemoryTuner chore.
2016-12-01 19:13:26,237 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] regionserver.HRegionServer: Serving as hscale-dev1-dn1,16020,1480599802236, RpcServer on hscale-dev1-dn1/10.60.70.11:16020, sessionid=0x258ba9a256f000e
2016-12-01 19:13:26,243 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020] quotas.RegionServerQuotaManager: Quota support disabled
2016-12-01 19:13:26,267 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164..meta.1480599549977.meta
2016-12-01 19:13:26,303 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164..meta.1480599549977.meta, length=8111
2016-12-01 19:13:26,303 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:13:26,320 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164..meta.1480599549977.meta
2016-12-01 19:13:26,324 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164..meta.1480599549977.meta after 4ms
2016-12-01 19:13:26,485 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/meta/1588230740/recovered.edits/0000000000000000763.temp region=1588230740
2016-12-01 19:13:26,505 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:13:26,599 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/meta/1588230740/recovered.edits/0000000000000000763.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/meta/1588230740/recovered.edits/0000000000000000848
2016-12-01 19:13:26,599 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 44 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164..meta.1480599549977.meta, length=8111, corrupted=false, progress failed=false
2016-12-01 19:13:26,615 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164..meta.1480599549977.meta to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:13:26,615 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@38bae511 in 345ms
2016-12-01 19:13:35,047 INFO  [main-EventThread] zookeeper.ZKLeaderManager: Leader change, but no new leader found
2016-12-01 19:13:35,066 INFO  [main-EventThread] zookeeper.ZKLeaderManager: Found new leader for znode: /hbase-secure/tokenauth/keymaster
2016-12-01 19:13:39,425 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=30, waitTime=5
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=30, waitTime=5
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=30, waitTime=5
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:13:39,636 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:39,942 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:40,447 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:41,461 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:41,466 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:13:39 IST 2016, RpcRetryingCaller{globalStartTime=1480599819418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:39 IST 2016, RpcRetryingCaller{globalStartTime=1480599819418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:39 IST 2016, RpcRetryingCaller{globalStartTime=1480599819418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:40 IST 2016, RpcRetryingCaller{globalStartTime=1480599819418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:41 IST 2016, RpcRetryingCaller{globalStartTime=1480599819418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:13:42,966 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:43,170 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:43,475 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:43,985 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:44,419 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:44,623 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:44,929 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:44,991 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:44,995 WARN  [pool-10-thread-1] client.HBaseAdmin: failed to get the procedure result procId=7
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:13:42 IST 2016, RpcRetryingCaller{globalStartTime=1480599822965, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:43 IST 2016, RpcRetryingCaller{globalStartTime=1480599822965, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:43 IST 2016, RpcRetryingCaller{globalStartTime=1480599822965, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:43 IST 2016, RpcRetryingCaller{globalStartTime=1480599822965, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:44 IST 2016, RpcRetryingCaller{globalStartTime=1480599822965, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:13:45,438 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:46,448 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:46,452 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:13:44 IST 2016, RpcRetryingCaller{globalStartTime=1480599824418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:44 IST 2016, RpcRetryingCaller{globalStartTime=1480599824418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:44 IST 2016, RpcRetryingCaller{globalStartTime=1480599824418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:45 IST 2016, RpcRetryingCaller{globalStartTime=1480599824418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:46 IST 2016, RpcRetryingCaller{globalStartTime=1480599824418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:13:49,420 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:49,625 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:49,929 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:50,436 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:51,445 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:51,453 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:13:49 IST 2016, RpcRetryingCaller{globalStartTime=1480599829418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:49 IST 2016, RpcRetryingCaller{globalStartTime=1480599829418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:49 IST 2016, RpcRetryingCaller{globalStartTime=1480599829418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:50 IST 2016, RpcRetryingCaller{globalStartTime=1480599829418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:51 IST 2016, RpcRetryingCaller{globalStartTime=1480599829418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
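[Editor's note] The recurring "Can't get master address from ZooKeeper; znode data == null" above means the HMaster never registered its address in ZooKeeper, so every client retry is doomed before it starts. A quick triage sequence might look like the following — the hostnames and ports are taken from this log, but the znode parent path and the zkCli location are assumptions (defaults for an HDP 2.4-style install; adjust `zookeeper.znode.parent` and paths to match the actual cluster):

```shell
# 1. Confirm an HMaster process is actually running on the master host
#    (hscale-dev1-nn per the failed-servers entries above).
ssh hscale-dev1-nn 'ps aux | grep "[H]Master"'

# 2. Inspect the master znode directly. An empty or missing node reproduces
#    the "znode data == null" error. The parent is /hbase-unsecure on many
#    HDP clusters (or /hbase-secure with Kerberos) -- assumed here.
/usr/hdp/current/zookeeper-client/bin/zkCli.sh -server hscale-dev1-nn:2181 \
    get /hbase-unsecure/master

# 3. Check whether the master RPC port is listening; the traces show clients
#    targeting hscale-dev1-nn:16000 and getting "Connection refused".
ssh hscale-dev1-nn 'netstat -tlnp | grep 16000'

# 4. If the process is down, the HMaster's own log on that host will show
#    why it aborted (path assumed from the standard HDP log layout).
ssh hscale-dev1-nn 'tail -n 200 /var/log/hbase/hbase-hbase-master-*.log'
```

If step 1 or 3 fails, the region server errors below are a symptom, not the cause: the master is down or unreachable, and its own startup log is the place to look.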
2016-12-01 19:13:54,420 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:54,625 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:54,929 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:54,997 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:55,202 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:55,435 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:55,507 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:56,017 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:56,447 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:56,452 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:13:54 IST 2016, RpcRetryingCaller{globalStartTime=1480599834418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:54 IST 2016, RpcRetryingCaller{globalStartTime=1480599834418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:54 IST 2016, RpcRetryingCaller{globalStartTime=1480599834418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:55 IST 2016, RpcRetryingCaller{globalStartTime=1480599834418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:56 IST 2016, RpcRetryingCaller{globalStartTime=1480599834418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:13:57,025 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:57,029 WARN  [pool-10-thread-1] client.HBaseAdmin: failed to get the procedure result procId=7
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:13:55 IST 2016, RpcRetryingCaller{globalStartTime=1480599834997, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:55 IST 2016, RpcRetryingCaller{globalStartTime=1480599834997, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:55 IST 2016, RpcRetryingCaller{globalStartTime=1480599834997, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:56 IST 2016, RpcRetryingCaller{globalStartTime=1480599834997, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:57 IST 2016, RpcRetryingCaller{globalStartTime=1480599834997, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:13:59,420 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:59,625 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:13:59,930 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:00,435 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:01,449 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:01,453 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:13:59 IST 2016, RpcRetryingCaller{globalStartTime=1480599839418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:59 IST 2016, RpcRetryingCaller{globalStartTime=1480599839418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:13:59 IST 2016, RpcRetryingCaller{globalStartTime=1480599839418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:00 IST 2016, RpcRetryingCaller{globalStartTime=1480599839418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:01 IST 2016, RpcRetryingCaller{globalStartTime=1480599839418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
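The same three root causes ("Can't get master address from ZooKeeper; znode data == null", FailedServerException, Connection refused) repeat throughout this log. When triaging a dump like this, one quick way to confirm that and rank the failures is to tally the distinct `Caused by:` exception types. A minimal sketch (the sample file and its path are illustrative; in practice point grep at the real regionserver log):

```shell
#!/bin/sh
# Build a small sample file with representative lines from the log above;
# with a real log, skip this step and grep the actual file.
cat > /tmp/rs_sample.log <<'EOF'
2016-12-01 19:13:56,452 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
Caused by: java.net.ConnectException: Connection refused
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
EOF

# Extract just the exception class after each "Caused by:" marker,
# then count and sort so the dominant root cause floats to the top.
grep -o 'Caused by: [^:]*' /tmp/rs_sample.log | sort | uniq -c | sort -rn
```

All three causes here point at the same underlying condition: the HMaster at hscale-dev1-nn:16000 is not up (or has not registered its address in ZooKeeper), so the regionserver's retries exhaust against a dead endpoint.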
2016-12-01 19:14:04,420 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:04,626 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:04,931 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:05,437 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:06,444 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:06,452 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:04 IST 2016, RpcRetryingCaller{globalStartTime=1480599844418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:04 IST 2016, RpcRetryingCaller{globalStartTime=1480599844418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:04 IST 2016, RpcRetryingCaller{globalStartTime=1480599844418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:05 IST 2016, RpcRetryingCaller{globalStartTime=1480599844418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:06 IST 2016, RpcRetryingCaller{globalStartTime=1480599844418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
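The `RetriesExhaustedException` entry above lists one line per attempt in the form `<timestamp>, RpcRetryingCaller{...}, <root-cause message>`; here all five attempts fail with the same `MasterNotRunningException` (master znode data is null in ZooKeeper). When triaging long logs like this one, a small helper script can tally the distinct root causes across such retry blocks. This is a hypothetical triage aid, not part of HBase; the sample lines are copied verbatim from the log above.

```python
import re
from collections import Counter

# Two retry-attempt lines copied verbatim from the RetriesExhaustedException
# block above (same format for all five attempts).
LINES = [
    "Thu Dec 01 19:14:04 IST 2016, RpcRetryingCaller{globalStartTime=1480599844418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null",
    "Thu Dec 01 19:14:06 IST 2016, RpcRetryingCaller{globalStartTime=1480599844418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null",
]

def summarize_retries(lines):
    """Count distinct root-cause messages across retry-attempt lines.

    Each attempt line ends with the exception text that follows the
    closing brace of the RpcRetryingCaller{...} descriptor.
    """
    causes = Counter()
    for line in lines:
        m = re.search(r"RpcRetryingCaller\{[^}]*\}, (.+)$", line)
        if m:
            causes[m.group(1)] += 1
    return causes

if __name__ == "__main__":
    for cause, count in summarize_retries(LINES).most_common():
        print(f"{count}x {cause}")
```

With all five lines from the block above fed in, this would report a single cause repeated five times, confirming the retries never hit a different failure mode.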
2016-12-01 19:14:07,031 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:07,235 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:07,540 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:08,045 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:09,049 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:09,052 WARN  [pool-10-thread-1] client.HBaseAdmin: failed to get the procedure result procId=7
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:07 IST 2016, RpcRetryingCaller{globalStartTime=1480599847030, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:07 IST 2016, RpcRetryingCaller{globalStartTime=1480599847030, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:07 IST 2016, RpcRetryingCaller{globalStartTime=1480599847030, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:08 IST 2016, RpcRetryingCaller{globalStartTime=1480599847030, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:09 IST 2016, RpcRetryingCaller{globalStartTime=1480599847030, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:14:09,418 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:09,623 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    [stack trace identical to the 19:14:09,418 occurrence above; the same warning and trace repeat at 19:14:09,927 and 19:14:10,434]
2016-12-01 19:14:11,447 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:11,451 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:09 IST 2016, RpcRetryingCaller{globalStartTime=1480599849418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:09 IST 2016, RpcRetryingCaller{globalStartTime=1480599849418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:09 IST 2016, RpcRetryingCaller{globalStartTime=1480599849418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:10 IST 2016, RpcRetryingCaller{globalStartTime=1480599849418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:11 IST 2016, RpcRetryingCaller{globalStartTime=1480599849418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:14:14,420 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    [stack trace identical to the 19:14:11,447 occurrence on this thread above]
2016-12-01 19:14:14,624 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    [stack trace identical to the earlier FailedServerException occurrences on this thread]
2016-12-01 19:14:14,929 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:15,438 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:16,452 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:16,457 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:14 IST 2016, RpcRetryingCaller{globalStartTime=1480599854418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:14 IST 2016, RpcRetryingCaller{globalStartTime=1480599854418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:14 IST 2016, RpcRetryingCaller{globalStartTime=1480599854418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:15 IST 2016, RpcRetryingCaller{globalStartTime=1480599854418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:16 IST 2016, RpcRetryingCaller{globalStartTime=1480599854418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:14:19,054 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:19,261 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:19,418 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:19,570 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:19,623 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:19,929 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:20,078 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:20,436 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:21,089 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:21,093 WARN  [pool-10-thread-1] client.HBaseAdmin: failed to get the procedure result procId=7
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:19 IST 2016, RpcRetryingCaller{globalStartTime=1480599859052, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:19 IST 2016, RpcRetryingCaller{globalStartTime=1480599859052, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:19 IST 2016, RpcRetryingCaller{globalStartTime=1480599859052, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:20 IST 2016, RpcRetryingCaller{globalStartTime=1480599859052, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:21 IST 2016, RpcRetryingCaller{globalStartTime=1480599859052, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:14:21,445 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:21,449 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:19 IST 2016, RpcRetryingCaller{globalStartTime=1480599859418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:19 IST 2016, RpcRetryingCaller{globalStartTime=1480599859418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:19 IST 2016, RpcRetryingCaller{globalStartTime=1480599859418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:20 IST 2016, RpcRetryingCaller{globalStartTime=1480599859418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:21 IST 2016, RpcRetryingCaller{globalStartTime=1480599859418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:14:24,420 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:24,625 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:24,929 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:25,432 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:26,443 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:26,447 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:24 IST 2016, RpcRetryingCaller{globalStartTime=1480599864418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:24 IST 2016, RpcRetryingCaller{globalStartTime=1480599864418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:24 IST 2016, RpcRetryingCaller{globalStartTime=1480599864418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:25 IST 2016, RpcRetryingCaller{globalStartTime=1480599864418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:26 IST 2016, RpcRetryingCaller{globalStartTime=1480599864418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:14:29,420 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:29,628 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:29,932 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:30,438 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:31,452 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:31,455 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:29 IST 2016, RpcRetryingCaller{globalStartTime=1480599869418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:29 IST 2016, RpcRetryingCaller{globalStartTime=1480599869418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:29 IST 2016, RpcRetryingCaller{globalStartTime=1480599869418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:30 IST 2016, RpcRetryingCaller{globalStartTime=1480599869418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:31 IST 2016, RpcRetryingCaller{globalStartTime=1480599869418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:14:34,420 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:34,625 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:34,931 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:35,436 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:36,449 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:36,452 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:34 IST 2016, RpcRetryingCaller{globalStartTime=1480599874418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:34 IST 2016, RpcRetryingCaller{globalStartTime=1480599874418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:34 IST 2016, RpcRetryingCaller{globalStartTime=1480599874418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:35 IST 2016, RpcRetryingCaller{globalStartTime=1480599874418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:14:36 IST 2016, RpcRetryingCaller{globalStartTime=1480599874418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:14:39,516 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ipc.ServerNotRunningYetException): org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2317)
    at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:924)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55373)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ipc.ServerNotRunningYetException): org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2317)
    at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:924)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55373)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:39,729 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ipc.ServerNotRunningYetException): org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2317)
    at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:924)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55373)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ipc.ServerNotRunningYetException): org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2317)
    at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:924)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55373)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:14:41,584 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:39 IST 2016, RpcRetryingCaller{globalStartTime=1480599879418, pause=100, retries=5}, org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=111, waitTime=1
Thu Dec 01 19:14:39 IST 2016, RpcRetryingCaller{globalStartTime=1480599879418, pause=100, retries=5}, org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=113, waitTime=0
Thu Dec 01 19:14:40 IST 2016, RpcRetryingCaller{globalStartTime=1480599879418, pause=100, retries=5}, org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=115, waitTime=1
Thu Dec 01 19:14:40 IST 2016, RpcRetryingCaller{globalStartTime=1480599879418, pause=100, retries=5}, org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=117, waitTime=0
Thu Dec 01 19:14:41 IST 2016, RpcRetryingCaller{globalStartTime=1480599879418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)


    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.getClusterStatus(MasterProtos.java:58140)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.getClusterStatus(ConnectionManager.java:2036)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2769)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2765)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 14 more
2016-12-01 19:14:41,633 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=124, waitTime=0
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=124, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=124, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:14:42,160 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=126, waitTime=1
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=126, waitTime=1
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=126, waitTime=1
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:14:43,194 WARN  [pool-10-thread-1] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=128, waitTime=3
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=128, waitTime=3
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=128, waitTime=3
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:14:43,214 WARN  [pool-10-thread-1] client.HBaseAdmin: failed to get the procedure result procId=7
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:41 IST 2016, RpcRetryingCaller{globalStartTime=1480599881094, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getProcedureResult(MasterRpcServices.java:1023)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55469)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:41 IST 2016, RpcRetryingCaller{globalStartTime=1480599881094, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getProcedureResult(MasterRpcServices.java:1023)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55469)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:41 IST 2016, RpcRetryingCaller{globalStartTime=1480599881094, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getProcedureResult(MasterRpcServices.java:1023)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55469)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:42 IST 2016, RpcRetryingCaller{globalStartTime=1480599881094, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getProcedureResult(MasterRpcServices.java:1023)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55469)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:43 IST 2016, RpcRetryingCaller{globalStartTime=1480599881094, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getProcedureResult(MasterRpcServices.java:1023)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55469)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)


    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getProcedureResult(MasterRpcServices.java:1023)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55469)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.getProcedureResult(MasterProtos.java:58728)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.getProcedureResult(ConnectionManager.java:1951)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture$2.call(HBaseAdmin.java:4387)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture$2.call(HBaseAdmin.java:4384)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 14 more
2016-12-01 19:14:45,732 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020] regionserver.RSRpcServices: Open hbase:meta,,1.1588230740
2016-12-01 19:14:45,759 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] wal.WALFactory: Instantiating WALProvider of type class org.apache.hadoop.hbase.wal.DefaultWALProvider
2016-12-01 19:14:45,767 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] wal.FSHLog: WAL configuration: blocksize=128 MB, rollsize=121.60 MB, prefix=hscale-dev1-dn1%2C16020%2C1480599802236..meta, suffix=.meta, logDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236, archiveDir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/oldWALs
2016-12-01 19:14:45,842 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] wal.FSHLog: Slow sync cost: 40 ms, current pipeline: []
2016-12-01 19:14:45,842 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] wal.FSHLog: New WAL /apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599802236/hscale-dev1-dn1%2C16020%2C1480599802236..meta.1480599885767.meta
2016-12-01 19:14:45,870 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:14:45,873 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:14:45,891 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:14:45,896 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:14:45,908 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:14:45,913 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:14:45,925 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:14:45,928 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:14:45,931 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:14:45,945 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of hbase:meta successfully.
2016-12-01 19:14:45,985 INFO  [StoreOpener-1588230740-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=0, currentSize=1322544, freeSize=1287167696, maxSize=1288490240, heapSize=1322544, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:14:45,994 INFO  [StoreOpener-1588230740-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:14:46,051 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/meta/1588230740/recovered.edits/0000000000000000848
2016-12-01 19:14:46,091 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] regionserver.HRegion: Started memstore flush for hbase:meta,,1.1588230740, current region memstore size 29.86 KB, and 1/1 column families' memstores are being flushed.; wal is null, using passed sequenceid=848
2016-12-01 19:14:46,201 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=848, memsize=29.9 K, hasBloomFilter=false, into tmp file hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/meta/1588230740/.tmp/0439825b78be415ebd7262553a42d3c3
2016-12-01 19:14:46,257 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] regionserver.HStore: Added hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/meta/1588230740/info/0439825b78be415ebd7262553a42d3c3, entries=126, sequenceid=848, filesize=19.1 K
2016-12-01 19:14:46,258 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] regionserver.HRegion: Finished memstore flush of ~29.86 KB/30576, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 168ms, sequenceid=848, compaction requested=false; wal=null
2016-12-01 19:14:46,350 INFO  [RS_OPEN_META-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined 1588230740; next sequenceid=849
2016-12-01 19:14:46,372 INFO  [PostOpenDeployTasks:1588230740] regionserver.HRegionServer: Post open deploy tasks for hbase:meta,,1.1588230740
2016-12-01 19:14:46,372 INFO  [PostOpenDeployTasks:1588230740] zookeeper.MetaTableLocator: Setting hbase:meta region location in ZooKeeper as hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:46,466 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:44 IST 2016, RpcRetryingCaller{globalStartTime=1480599884417, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:44 IST 2016, RpcRetryingCaller{globalStartTime=1480599884417, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:44 IST 2016, RpcRetryingCaller{globalStartTime=1480599884417, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:45 IST 2016, RpcRetryingCaller{globalStartTime=1480599884417, pause=100, retries=5}, org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=137, waitTime=1
Thu Dec 01 19:14:46 IST 2016, RpcRetryingCaller{globalStartTime=1480599884417, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.getClusterStatus(MasterProtos.java:58140)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.getClusterStatus(ConnectionManager.java:2036)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2769)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2765)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 14 more
2016-12-01 19:14:46,924 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null14.1480599548764
2016-12-01 19:14:46,958 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null14.1480599548764, length=380
2016-12-01 19:14:46,958 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:46,980 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null14.1480599548764
2016-12-01 19:14:46,982 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null14.1480599548764 after 2ms
2016-12-01 19:14:47,066 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/720f27c20e300e2c5bc7b5d3b8eddcbf/recovered.edits/0000000000000000007.temp region=720f27c20e300e2c5bc7b5d3b8eddcbf
2016-12-01 19:14:47,066 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:14:47,144 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/720f27c20e300e2c5bc7b5d3b8eddcbf/recovered.edits/0000000000000000007.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/720f27c20e300e2c5bc7b5d3b8eddcbf/recovered.edits/0000000000000000007
2016-12-01 19:14:47,145 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null14.1480599548764, length=380, corrupted=false, progress failed=false
2016-12-01 19:14:47,159 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null14.1480599548764 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:47,159 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@68e41cf4 in 233ms
2016-12-01 19:14:47,527 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null13.1480599548676
2016-12-01 19:14:47,547 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null13.1480599548676, length=380
2016-12-01 19:14:47,547 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:47,565 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null13.1480599548676
2016-12-01 19:14:47,566 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null13.1480599548676 after 1ms
2016-12-01 19:14:47,652 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/TENTATIVE_DDL/64d62e6d820b30bc90af9615c4188533/recovered.edits/0000000000000000007.temp region=64d62e6d820b30bc90af9615c4188533
2016-12-01 19:14:47,652 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:14:47,734 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/TENTATIVE_DDL/64d62e6d820b30bc90af9615c4188533/recovered.edits/0000000000000000007.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/TENTATIVE_DDL/64d62e6d820b30bc90af9615c4188533/recovered.edits/0000000000000000007
2016-12-01 19:14:47,734 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null13.1480599548676, length=380, corrupted=false, progress failed=false
2016-12-01 19:14:47,755 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null13.1480599548676 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:47,755 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@73bb2bc8 in 228ms
2016-12-01 19:14:48,465 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null8.1480599548264
2016-12-01 19:14:48,496 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null8.1480599548264, length=415
2016-12-01 19:14:48,496 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:48,514 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null8.1480599548264
2016-12-01 19:14:48,516 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null8.1480599548264 after 2ms
2016-12-01 19:14:48,603 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-0] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/bb61d57cfdba9c2670d6050fc59581c6/recovered.edits/0000000000000000134.temp region=bb61d57cfdba9c2670d6050fc59581c6
2016-12-01 19:14:48,604 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:14:48,674 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/bb61d57cfdba9c2670d6050fc59581c6/recovered.edits/0000000000000000134.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/bb61d57cfdba9c2670d6050fc59581c6/recovered.edits/0000000000000000134
2016-12-01 19:14:48,674 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null8.1480599548264, length=415, corrupted=false, progress failed=false
2016-12-01 19:14:48,690 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null8.1480599548264 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:48,690 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@1773a04d in 225ms
2016-12-01 19:14:49,019 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null6.1480599548126
2016-12-01 19:14:49,048 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null6.1480599548126, length=386
2016-12-01 19:14:49,048 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:49,063 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null6.1480599548126
2016-12-01 19:14:49,064 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null6.1480599548126 after 1ms
2016-12-01 19:14:49,119 WARN  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] regionserver.HRegionServer: Unable to connect to the master to check the last flushed sequence id
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=11, waitTime=0
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.getLastFlushedSequenceId(RegionServerStatusProtos.java:9018)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getLastSequenceId(HRegionServer.java:2303)
    at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:338)
    at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:235)
    at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:104)
    at org.apache.hadoop.hbase.regionserver.handler.WALSplitterHandler.process(WALSplitterHandler.java:72)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=11, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 11 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=11, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:14:49,145 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PROCEDURE/2ca0f5757a70a75a2dfac9e2b8e8de14/recovered.edits/0000000000000000102.temp region=2ca0f5757a70a75a2dfac9e2b8e8de14
2016-12-01 19:14:49,146 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:14:49,231 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PROCEDURE/2ca0f5757a70a75a2dfac9e2b8e8de14/recovered.edits/0000000000000000102.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PROCEDURE/2ca0f5757a70a75a2dfac9e2b8e8de14/recovered.edits/0000000000000000102
2016-12-01 19:14:49,232 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null6.1480599548126, length=386, corrupted=false, progress failed=false
2016-12-01 19:14:49,251 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null6.1480599548126 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:49,251 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@7feedb22 in 231ms
2016-12-01 19:14:49,893 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null11.1480599548496
2016-12-01 19:14:49,924 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null11.1480599548496, length=380
2016-12-01 19:14:49,924 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:49,940 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null11.1480599548496
2016-12-01 19:14:49,941 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null11.1480599548496 after 1ms
2016-12-01 19:14:50,034 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/af148aa23be6b8294a12150e68bdb64f/recovered.edits/0000000000000000005.temp region=af148aa23be6b8294a12150e68bdb64f
2016-12-01 19:14:50,034 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:14:50,117 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/af148aa23be6b8294a12150e68bdb64f/recovered.edits/0000000000000000005.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/af148aa23be6b8294a12150e68bdb64f/recovered.edits/0000000000000000005
2016-12-01 19:14:50,118 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null11.1480599548496, length=380, corrupted=false, progress failed=false
2016-12-01 19:14:50,132 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null11.1480599548496 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:50,132 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@4eec6cfc in 238ms
2016-12-01 19:14:50,552 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null4.1480599547968
2016-12-01 19:14:50,571 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null4.1480599547968, length=367
2016-12-01 19:14:50,571 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:50,589 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null4.1480599547968
2016-12-01 19:14:50,590 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null4.1480599547968 after 1ms
2016-12-01 19:14:50,650 WARN  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] regionserver.HRegionServer: Unable to connect to the master to check the last flushed sequence id
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=13, waitTime=0
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.getLastFlushedSequenceId(RegionServerStatusProtos.java:9018)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getLastSequenceId(HRegionServer.java:2303)
    at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:338)
    at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:235)
    at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:104)
    at org.apache.hadoop.hbase.regionserver.handler.WALSplitterHandler.process(WALSplitterHandler.java:72)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=13, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 11 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=13, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:14:50,671 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/6881792bca9cea2dad317d0d9df13025/recovered.edits/0000000000000000125.temp region=6881792bca9cea2dad317d0d9df13025
2016-12-01 19:14:50,672 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:14:50,742 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/6881792bca9cea2dad317d0d9df13025/recovered.edits/0000000000000000125.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/6881792bca9cea2dad317d0d9df13025/recovered.edits/0000000000000000125
2016-12-01 19:14:50,742 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null4.1480599547968, length=367, corrupted=false, progress failed=false
2016-12-01 19:14:50,762 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null4.1480599547968 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:50,762 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@5a9867d7 in 210ms
2016-12-01 19:14:51,274 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null15.1480599548837
2016-12-01 19:14:51,292 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null15.1480599548837, length=91
2016-12-01 19:14:51,292 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:51,310 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null15.1480599548837
2016-12-01 19:14:51,311 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null15.1480599548837 after 1ms
2016-12-01 19:14:51,373 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:14:51,374 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null15.1480599548837, length=91, corrupted=false, progress failed=false
2016-12-01 19:14:51,392 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null15.1480599548837 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:51,392 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@5ec6aaed in 118ms
2016-12-01 19:14:51,446 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=148, waitTime=0
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=148, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=148, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:14:51,463 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:49 IST 2016, RpcRetryingCaller{globalStartTime=1480599889418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:49 IST 2016, RpcRetryingCaller{globalStartTime=1480599889418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:49 IST 2016, RpcRetryingCaller{globalStartTime=1480599889418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:50 IST 2016, RpcRetryingCaller{globalStartTime=1480599889418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:51 IST 2016, RpcRetryingCaller{globalStartTime=1480599889418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)


    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.getClusterStatus(MasterProtos.java:58140)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.getClusterStatus(ConnectionManager.java:2036)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2769)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2765)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 14 more
2016-12-01 19:14:52,202 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null9.1480599548335
2016-12-01 19:14:52,236 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null9.1480599548335, length=421
2016-12-01 19:14:52,236 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:52,250 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null9.1480599548335
2016-12-01 19:14:52,252 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null9.1480599548335 after 2ms
2016-12-01 19:14:52,320 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-1] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/15fcdda3f016fb79c89b1e7284518979/recovered.edits/0000000000000000150.temp region=15fcdda3f016fb79c89b1e7284518979
2016-12-01 19:14:52,321 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:14:52,400 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/15fcdda3f016fb79c89b1e7284518979/recovered.edits/0000000000000000150.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/15fcdda3f016fb79c89b1e7284518979/recovered.edits/0000000000000000150
2016-12-01 19:14:52,400 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null9.1480599548335, length=421, corrupted=false, progress failed=false
2016-12-01 19:14:52,417 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null9.1480599548335 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:52,417 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@64f00fe9 in 214ms
2016-12-01 19:14:53,197 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null7.1480599548190
2016-12-01 19:14:53,225 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null7.1480599548190, length=380
2016-12-01 19:14:53,225 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:53,242 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null7.1480599548190
2016-12-01 19:14:53,244 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null7.1480599548190 after 2ms
2016-12-01 19:14:53,327 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/04d5ffef435a1e4041af8895340de6ae/recovered.edits/0000000000000000005.temp region=04d5ffef435a1e4041af8895340de6ae
2016-12-01 19:14:53,327 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:14:53,421 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/04d5ffef435a1e4041af8895340de6ae/recovered.edits/0000000000000000005.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/04d5ffef435a1e4041af8895340de6ae/recovered.edits/0000000000000000005
2016-12-01 19:14:53,422 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null7.1480599548190, length=380, corrupted=false, progress failed=false
2016-12-01 19:14:53,465 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null7.1480599548190 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:53,465 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@2097386b in 267ms
2016-12-01 19:14:53,867 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null12.1480599548618
2016-12-01 19:14:53,895 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null12.1480599548618, length=401
2016-12-01 19:14:53,895 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:53,913 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null12.1480599548618
2016-12-01 19:14:53,915 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null12.1480599548618 after 2ms
2016-12-01 19:14:53,984 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-1] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/DD_ENTITY_DEF/fa9ccd67af9c529bf1fab5a2893825af/recovered.edits/0000000000000000432.temp region=fa9ccd67af9c529bf1fab5a2893825af
2016-12-01 19:14:53,984 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:14:54,066 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/DD_ENTITY_DEF/fa9ccd67af9c529bf1fab5a2893825af/recovered.edits/0000000000000000432.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/DD_ENTITY_DEF/fa9ccd67af9c529bf1fab5a2893825af/recovered.edits/0000000000000000432
2016-12-01 19:14:54,066 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null12.1480599548618, length=401, corrupted=false, progress failed=false
2016-12-01 19:14:54,082 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null12.1480599548618 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:54,083 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@5394c301 in 216ms
2016-12-01 19:14:54,656 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null3.1480599547840
2016-12-01 19:14:54,678 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null3.1480599547840, length=380
2016-12-01 19:14:54,678 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:54,695 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null3.1480599547840
2016-12-01 19:14:54,696 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null3.1480599547840 after 1ms
2016-12-01 19:14:54,775 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/aa3bf9854a2e6a06cae52b1cfa2d6754/recovered.edits/0000000000000000005.temp region=aa3bf9854a2e6a06cae52b1cfa2d6754
2016-12-01 19:14:54,776 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:14:54,849 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/aa3bf9854a2e6a06cae52b1cfa2d6754/recovered.edits/0000000000000000005.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/aa3bf9854a2e6a06cae52b1cfa2d6754/recovered.edits/0000000000000000005
2016-12-01 19:14:54,850 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null3.1480599547840, length=380, corrupted=false, progress failed=false
2016-12-01 19:14:54,872 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null3.1480599547840 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:54,872 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@11750287 in 215ms
2016-12-01 19:14:55,211 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null2.1480599547736
2016-12-01 19:14:55,238 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null2.1480599547736, length=389
2016-12-01 19:14:55,238 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:55,255 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null2.1480599547736
2016-12-01 19:14:55,256 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null2.1480599547736 after 1ms
2016-12-01 19:14:55,322 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_SEQUENCES/045c57a37dbbdf8427895346f2ea2e0c/recovered.edits/0000000000000000005.temp region=045c57a37dbbdf8427895346f2ea2e0c
2016-12-01 19:14:55,322 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:14:55,395 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_SEQUENCES/045c57a37dbbdf8427895346f2ea2e0c/recovered.edits/0000000000000000005.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_SEQUENCES/045c57a37dbbdf8427895346f2ea2e0c/recovered.edits/0000000000000000005
2016-12-01 19:14:55,395 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599543164-splitting/hscale-dev1-dn2%2C16020%2C1480599543164.null2.1480599547736, length=389, corrupted=false, progress failed=false
2016-12-01 19:14:55,412 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599543164-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599543164.null2.1480599547736 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:55,412 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@3b2930ec in 201ms
2016-12-01 19:14:55,685 INFO  [PriorityRpcServer.handler=12,queue=0,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,,1480593676447.a5ba95da3316cedaf5c5175e83c8b1bb.
2016-12-01 19:14:55,692 INFO  [PriorityRpcServer.handler=12,queue=0,port=16020] regionserver.RSRpcServices: Open ENCOUNTER,,1479977632429.15fcdda3f016fb79c89b1e7284518979.
2016-12-01 19:14:55,714 INFO  [PriorityRpcServer.handler=12,queue=0,port=16020] regionserver.RSRpcServices: Open SYSTEM.STATS,,1479977358242.562458d14118a6f198dad32d8a0d0b12.
2016-12-01 19:14:55,715 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:14:55,715 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:14:55,731 INFO  [PriorityRpcServer.handler=12,queue=0,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x08,1480593676447.af148aa23be6b8294a12150e68bdb64f.
2016-12-01 19:14:55,732 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:14:55,732 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:14:55,734 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:14:55,734 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:14:55,734 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:14:55,734 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:14:55,747 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:14:55,747 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:14:55,748 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:14:55,748 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:14:55,748 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:14:55,749 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:14:55,749 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:14:55,750 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:14:55,751 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:14:55,751 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:14:55,751 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:14:55,751 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:14:55,756 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] compress.CodecPool: Got brand-new compressor [.snappy]
2016-12-01 19:14:55,868 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:14:55,868 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:14:55,868 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:14:55,868 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:14:55,868 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:14:55,868 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:14:55,868 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:14:55,876 INFO  [StoreOpener-a5ba95da3316cedaf5c5175e83c8b1bb-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=16, currentSize=1462576, freeSize=1287027664, maxSize=1288490240, heapSize=1462576, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:14:55,876 INFO  [StoreOpener-a5ba95da3316cedaf5c5175e83c8b1bb-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:14:55,878 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of SYSTEM.STATS successfully.
2016-12-01 19:14:55,882 INFO  [StoreOpener-a5ba95da3316cedaf5c5175e83c8b1bb-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=16, currentSize=1462576, freeSize=1287027664, maxSize=1288490240, heapSize=1462576, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:14:55,882 INFO  [StoreOpener-a5ba95da3316cedaf5c5175e83c8b1bb-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:14:55,884 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] parallel.ThreadPoolManager: Creating new pool for hscale-dev1-dn1,16020,1480599802236-index-writer
2016-12-01 19:14:55,884 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of SYSTEM.STATS successfully.
2016-12-01 19:14:55,889 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] parallel.ThreadPoolManager: Creating new pool for hscale-dev1-dn1,16020,1480599802236-recovery-writer
2016-12-01 19:14:55,891 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x66a999e3 connecting to ZooKeeper ensemble=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181
2016-12-01 19:14:55,891 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] zookeeper.ZooKeeper: Initiating client connection, connectString=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181 sessionTimeout=120000 watcher=hconnection-0x66a999e30x0, quorum=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181, baseZNode=/hbase-secure
2016-12-01 19:14:55,892 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1-SendThread(hscale-dev1-dn2:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-12-01 19:14:55,893 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Opening socket connection to server hscale-dev1-dn2/10.60.70.12:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-12-01 19:14:55,895 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of SYSTEM.STATS successfully.
2016-12-01 19:14:55,895 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Socket connection established to hscale-dev1-dn2/10.60.70.12:2181, initiating session
2016-12-01 19:14:55,899 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of SYSTEM.STATS successfully.
2016-12-01 19:14:55,903 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of SYSTEM.STATS successfully.
2016-12-01 19:14:55,913 INFO  [StoreOpener-562458d14118a6f198dad32d8a0d0b12-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=16, currentSize=1462576, freeSize=1287027664, maxSize=1288490240, heapSize=1462576, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:14:55,913 INFO  [StoreOpener-562458d14118a6f198dad32d8a0d0b12-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:14:55,918 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Session establishment complete on server hscale-dev1-dn2/10.60.70.12:2181, sessionid = 0x258ba9a256f0014, negotiated timeout = 120000
2016-12-01 19:14:55,924 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of ENCOUNTER successfully.
2016-12-01 19:14:55,924 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of ENCOUNTER successfully.
2016-12-01 19:14:55,924 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of ENCOUNTER successfully.
2016-12-01 19:14:55,925 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of ENCOUNTER successfully.
2016-12-01 19:14:55,925 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of ENCOUNTER successfully.
2016-12-01 19:14:55,934 INFO  [StoreOpener-15fcdda3f016fb79c89b1e7284518979-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=16, currentSize=1462576, freeSize=1287027664, maxSize=1288490240, heapSize=1462576, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:14:55,934 INFO  [StoreOpener-15fcdda3f016fb79c89b1e7284518979-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:14:55,946 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/SYSTEM.STATS/562458d14118a6f198dad32d8a0d0b12/recovered.edits/0000000000000000186
2016-12-01 19:14:55,964 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined a5ba95da3316cedaf5c5175e83c8b1bb; next sequenceid=12
2016-12-01 19:14:55,969 INFO  [PostOpenDeployTasks:a5ba95da3316cedaf5c5175e83c8b1bb] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,,1480593676447.a5ba95da3316cedaf5c5175e83c8b1bb.
2016-12-01 19:14:56,016 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/15fcdda3f016fb79c89b1e7284518979/recovered.edits/0000000000000000150
2016-12-01 19:14:56,049 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined 562458d14118a6f198dad32d8a0d0b12; next sequenceid=187
2016-12-01 19:14:56,072 INFO  [PostOpenDeployTasks:a5ba95da3316cedaf5c5175e83c8b1bb] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,,1480593676447.a5ba95da3316cedaf5c5175e83c8b1bb. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:56,073 INFO  [PostOpenDeployTasks:562458d14118a6f198dad32d8a0d0b12] regionserver.HRegionServer: Post open deploy tasks for SYSTEM.STATS,,1479977358242.562458d14118a6f198dad32d8a0d0b12.
2016-12-01 19:14:56,082 INFO  [PostOpenDeployTasks:562458d14118a6f198dad32d8a0d0b12] hbase.MetaTableAccessor: Updated row SYSTEM.STATS,,1479977358242.562458d14118a6f198dad32d8a0d0b12. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:56,143 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined 15fcdda3f016fb79c89b1e7284518979; next sequenceid=151
2016-12-01 19:14:56,143 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:14:56,163 INFO  [PostOpenDeployTasks:15fcdda3f016fb79c89b1e7284518979] regionserver.HRegionServer: Post open deploy tasks for ENCOUNTER,,1479977632429.15fcdda3f016fb79c89b1e7284518979.
2016-12-01 19:14:56,164 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:14:56,164 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:14:56,187 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:14:56,188 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:14:56,188 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null6.1480599549694
2016-12-01 19:14:56,188 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:14:56,188 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:14:56,190 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:14:56,190 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:14:56,190 INFO  [PostOpenDeployTasks:15fcdda3f016fb79c89b1e7284518979] hbase.MetaTableAccessor: Updated row ENCOUNTER,,1479977632429.15fcdda3f016fb79c89b1e7284518979. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:56,194 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:14:56,205 INFO  [StoreOpener-af148aa23be6b8294a12150e68bdb64f-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=16, currentSize=1462576, freeSize=1287027664, maxSize=1288490240, heapSize=1462576, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:14:56,205 INFO  [PriorityRpcServer.handler=14,queue=0,port=16020] regionserver.RSRpcServices: Open FMD,,1479977442279.6881792bca9cea2dad317d0d9df13025.
2016-12-01 19:14:56,205 INFO  [StoreOpener-af148aa23be6b8294a12150e68bdb64f-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:14:56,212 INFO  [StoreOpener-af148aa23be6b8294a12150e68bdb64f-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=16, currentSize=1462576, freeSize=1287027664, maxSize=1288490240, heapSize=1462576, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:14:56,213 INFO  [StoreOpener-af148aa23be6b8294a12150e68bdb64f-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:14:56,220 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/af148aa23be6b8294a12150e68bdb64f/recovered.edits/0000000000000000005
2016-12-01 19:14:56,223 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null6.1480599549694, length=370
2016-12-01 19:14:56,223 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:56,242 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null6.1480599549694
2016-12-01 19:14:56,243 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null6.1480599549694 after 1ms
2016-12-01 19:14:56,291 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:14:56,291 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:14:56,314 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:14:56,314 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:14:56,314 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:14:56,314 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:14:56,314 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:14:56,314 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:14:56,314 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:14:56,320 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of FMD successfully.
2016-12-01 19:14:56,320 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of FMD successfully.
2016-12-01 19:14:56,320 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of FMD successfully.
2016-12-01 19:14:56,320 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of FMD successfully.
2016-12-01 19:14:56,320 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of FMD successfully.
2016-12-01 19:14:56,330 INFO  [StoreOpener-6881792bca9cea2dad317d0d9df13025-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=16, currentSize=1462576, freeSize=1287027664, maxSize=1288490240, heapSize=1462576, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:14:56,330 INFO  [StoreOpener-6881792bca9cea2dad317d0d9df13025-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:14:56,337 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined af148aa23be6b8294a12150e68bdb64f; next sequenceid=6
2016-12-01 19:14:56,343 INFO  [PostOpenDeployTasks:af148aa23be6b8294a12150e68bdb64f] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x08,1480593676447.af148aa23be6b8294a12150e68bdb64f.
2016-12-01 19:14:56,349 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/6881792bca9cea2dad317d0d9df13025/recovered.edits/0000000000000000125
2016-12-01 19:14:56,353 INFO  [PostOpenDeployTasks:af148aa23be6b8294a12150e68bdb64f] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x08,1480593676447.af148aa23be6b8294a12150e68bdb64f. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:56,416 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/6adc2b33f62f4c61e99b85dff151f1d5/recovered.edits/0000000000000000177.temp region=6adc2b33f62f4c61e99b85dff151f1d5
2016-12-01 19:14:56,416 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:14:56,447 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:54 IST 2016, RpcRetryingCaller{globalStartTime=1480599894418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:54 IST 2016, RpcRetryingCaller{globalStartTime=1480599894418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:54 IST 2016, RpcRetryingCaller{globalStartTime=1480599894418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:55 IST 2016, RpcRetryingCaller{globalStartTime=1480599894418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:56 IST 2016, RpcRetryingCaller{globalStartTime=1480599894418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)


    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.getClusterStatus(MasterProtos.java:58140)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.getClusterStatus(ConnectionManager.java:2036)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2769)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2765)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 14 more
2016-12-01 19:14:56,455 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined 6881792bca9cea2dad317d0d9df13025; next sequenceid=126
2016-12-01 19:14:56,455 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:14:56,471 INFO  [PostOpenDeployTasks:6881792bca9cea2dad317d0d9df13025] regionserver.HRegionServer: Post open deploy tasks for FMD,,1479977442279.6881792bca9cea2dad317d0d9df13025.
2016-12-01 19:14:56,496 INFO  [PostOpenDeployTasks:6881792bca9cea2dad317d0d9df13025] hbase.MetaTableAccessor: Updated row FMD,,1479977442279.6881792bca9cea2dad317d0d9df13025. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:56,502 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/6adc2b33f62f4c61e99b85dff151f1d5/recovered.edits/0000000000000000177.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/6adc2b33f62f4c61e99b85dff151f1d5/recovered.edits/0000000000000000177
2016-12-01 19:14:56,502 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null6.1480599549694, length=370, corrupted=false, progress failed=false
2016-12-01 19:14:56,518 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null6.1480599549694 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:56,518 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@48f24983 in 330ms
2016-12-01 19:14:56,829 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599548091-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599548091.null1.1480599552140
2016-12-01 19:14:56,856 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599548091-splitting/hscale-dev1-dn4%2C16020%2C1480599548091.null1.1480599552140, length=365
2016-12-01 19:14:56,856 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:56,875 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599548091-splitting/hscale-dev1-dn4%2C16020%2C1480599548091.null1.1480599552140
2016-12-01 19:14:56,876 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599548091-splitting/hscale-dev1-dn4%2C16020%2C1480599548091.null1.1480599552140 after 1ms
2016-12-01 19:14:56,988 WARN  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] regionserver.HRegionServer: Unable to connect to the master to check the last flushed sequence id
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=23, waitTime=0
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.getLastFlushedSequenceId(RegionServerStatusProtos.java:9018)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getLastSequenceId(HRegionServer.java:2303)
    at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:338)
    at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:235)
    at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:104)
    at org.apache.hadoop.hbase.regionserver.handler.WALSplitterHandler.process(WALSplitterHandler.java:72)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=23, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 11 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=23, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:14:57,015 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/SYSTEM.FUNCTION/74957f0a078e8febe4e1a4a17d749db7/recovered.edits/0000000000000000099.temp region=74957f0a078e8febe4e1a4a17d749db7
2016-12-01 19:14:57,015 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:14:57,134 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/SYSTEM.FUNCTION/74957f0a078e8febe4e1a4a17d749db7/recovered.edits/0000000000000000099.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/SYSTEM.FUNCTION/74957f0a078e8febe4e1a4a17d749db7/recovered.edits/0000000000000000099
2016-12-01 19:14:57,134 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599548091-splitting/hscale-dev1-dn4%2C16020%2C1480599548091.null1.1480599552140, length=365, corrupted=false, progress failed=false
2016-12-01 19:14:57,151 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599548091-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599548091.null1.1480599552140 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:57,151 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@223cfaa2 in 322ms
2016-12-01 19:14:57,256 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020] regionserver.RSRpcServices: Open splice:16,,1480593690479.6dc8f41e5575f66fe7f0f0d0884400ee.
2016-12-01 19:14:57,291 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:14:57,291 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:14:57,303 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:14:57,303 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:14:57,306 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:14:57,306 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:14:57,306 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:14:57,308 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:14:57,308 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:14:57,311 INFO  [StoreOpener-6dc8f41e5575f66fe7f0f0d0884400ee-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=16, currentSize=1462576, freeSize=1287027664, maxSize=1288490240, heapSize=1462576, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:14:57,311 INFO  [StoreOpener-6dc8f41e5575f66fe7f0f0d0884400ee-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:14:57,321 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/16/6dc8f41e5575f66fe7f0f0d0884400ee/recovered.edits/0000000000000000004
2016-12-01 19:14:57,344 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/16/6dc8f41e5575f66fe7f0f0d0884400ee/recovered.edits/0000000000000000007
2016-12-01 19:14:57,356 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/16/6dc8f41e5575f66fe7f0f0d0884400ee/recovered.edits/0000000000000000010
2016-12-01 19:14:57,370 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/16/6dc8f41e5575f66fe7f0f0d0884400ee/recovered.edits/0000000000000000013
2016-12-01 19:14:57,488 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined 6dc8f41e5575f66fe7f0f0d0884400ee; next sequenceid=14
2016-12-01 19:14:57,499 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599558513-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599558513.null4.1480599563959
2016-12-01 19:14:57,500 INFO  [PostOpenDeployTasks:6dc8f41e5575f66fe7f0f0d0884400ee] regionserver.HRegionServer: Post open deploy tasks for splice:16,,1480593690479.6dc8f41e5575f66fe7f0f0d0884400ee.
2016-12-01 19:14:57,518 INFO  [PostOpenDeployTasks:6dc8f41e5575f66fe7f0f0d0884400ee] hbase.MetaTableAccessor: Updated row splice:16,,1480593690479.6dc8f41e5575f66fe7f0f0d0884400ee. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:57,528 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599558513-splitting/hscale-dev1-dn3%2C16020%2C1480599558513.null4.1480599563959, length=588
2016-12-01 19:14:57,528 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:57,550 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599558513-splitting/hscale-dev1-dn3%2C16020%2C1480599558513.null4.1480599563959
2016-12-01 19:14:57,552 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599558513-splitting/hscale-dev1-dn3%2C16020%2C1480599558513.null4.1480599563959 after 2ms
2016-12-01 19:14:57,667 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/a5ba95da3316cedaf5c5175e83c8b1bb/recovered.edits/0000000000000000010.temp region=a5ba95da3316cedaf5c5175e83c8b1bb
2016-12-01 19:14:57,668 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:14:57,767 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/a5ba95da3316cedaf5c5175e83c8b1bb/recovered.edits/0000000000000000010.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/a5ba95da3316cedaf5c5175e83c8b1bb/recovered.edits/0000000000000000011
2016-12-01 19:14:57,767 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 2 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599558513-splitting/hscale-dev1-dn3%2C16020%2C1480599558513.null4.1480599563959, length=588, corrupted=false, progress failed=false
2016-12-01 19:14:57,783 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599558513-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599558513.null4.1480599563959 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:57,783 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@6590a86 in 284ms
2016-12-01 19:14:58,101 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null0.1480599548537
2016-12-01 19:14:58,128 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null0.1480599548537, length=91
2016-12-01 19:14:58,128 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:58,140 INFO  [PriorityRpcServer.handler=8,queue=0,port=16020] regionserver.RSRpcServices: Open SYSTEM.CATALOG,,1479977347096.c9bd4102bc0eee28e3ffb28ff04128af.
2016-12-01 19:14:58,144 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null0.1480599548537
2016-12-01 19:14:58,145 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null0.1480599548537 after 1ms
2016-12-01 19:14:58,166 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:14:58,166 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:14:58,188 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:14:58,189 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:14:58,189 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:14:58,189 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:14:58,189 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:14:58,189 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:14:58,190 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:14:58,205 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:14:58,205 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null0.1480599548537, length=91, corrupted=false, progress failed=false
2016-12-01 19:14:58,223 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null0.1480599548537 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:58,223 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@1225a42b in 121ms
2016-12-01 19:14:58,268 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.MetaDataEndpointImpl: Starting Tracing-Metrics Systems
2016-12-01 19:14:58,270 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] metrics.Metrics: Initializing metrics system: phoenix
2016-12-01 19:14:58,270 WARN  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] impl.MetricsSystemImpl: HBase metrics system already initialized!
2016-12-01 19:14:58,271 WARN  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] metrics.Metrics: Phoenix metrics2/tracing sink was not started. Should it be?
2016-12-01 19:14:58,271 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.MetaDataEndpointImpl from HTD of SYSTEM.CATALOG successfully.
2016-12-01 19:14:58,746 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null10.1480599549992
2016-12-01 19:14:58,774 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null10.1480599549992, length=388
2016-12-01 19:14:58,774 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:58,792 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null10.1480599549992
2016-12-01 19:14:58,793 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null10.1480599549992 after 1ms
2016-12-01 19:14:58,883 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-0] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/17022d5d42890169454bf30a0203da51/recovered.edits/0000000000000000178.temp region=17022d5d42890169454bf30a0203da51
2016-12-01 19:14:58,884 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:14:58,957 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/17022d5d42890169454bf30a0203da51/recovered.edits/0000000000000000178.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/17022d5d42890169454bf30a0203da51/recovered.edits/0000000000000000178
2016-12-01 19:14:58,958 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null10.1480599549992, length=388, corrupted=false, progress failed=false
2016-12-01 19:14:58,977 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null10.1480599549992 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:58,977 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@70ebe2e7 in 230ms
2016-12-01 19:14:59,419 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599558513-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599558513.null0.1480599563361
2016-12-01 19:14:59,444 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599558513-splitting/hscale-dev1-dn3%2C16020%2C1480599558513.null0.1480599563361, length=380
2016-12-01 19:14:59,444 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:14:59,457 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599558513-splitting/hscale-dev1-dn3%2C16020%2C1480599558513.null0.1480599563361
2016-12-01 19:14:59,458 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599558513-splitting/hscale-dev1-dn3%2C16020%2C1480599558513.null0.1480599563361 after 1ms
2016-12-01 19:14:59,538 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/4ae4c5ba5fb97295e3d04f32627b110f/recovered.edits/0000000000000000007.temp region=4ae4c5ba5fb97295e3d04f32627b110f
2016-12-01 19:14:59,538 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:14:59,618 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/4ae4c5ba5fb97295e3d04f32627b110f/recovered.edits/0000000000000000007.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/4ae4c5ba5fb97295e3d04f32627b110f/recovered.edits/0000000000000000007
2016-12-01 19:14:59,619 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599558513-splitting/hscale-dev1-dn3%2C16020%2C1480599558513.null0.1480599563361, length=380, corrupted=false, progress failed=false
2016-12-01 19:14:59,636 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599558513-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599558513.null0.1480599563361 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:14:59,636 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@5012e8c5 in 216ms
2016-12-01 19:15:00,272 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.MetaDataRegionObserver from HTD of SYSTEM.CATALOG successfully.
2016-12-01 19:15:00,273 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of SYSTEM.CATALOG successfully.
2016-12-01 19:15:00,273 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of SYSTEM.CATALOG successfully.
2016-12-01 19:15:00,273 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of SYSTEM.CATALOG successfully.
2016-12-01 19:15:00,273 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of SYSTEM.CATALOG successfully.
2016-12-01 19:15:00,282 INFO  [StoreOpener-c9bd4102bc0eee28e3ffb28ff04128af-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=16, currentSize=1462576, freeSize=1287027664, maxSize=1288490240, heapSize=1462576, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:00,282 INFO  [StoreOpener-c9bd4102bc0eee28e3ffb28ff04128af-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:00,303 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/SYSTEM.CATALOG/c9bd4102bc0eee28e3ffb28ff04128af/recovered.edits/0000000000000000133
2016-12-01 19:15:00,377 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null9.1480599549930
2016-12-01 19:15:00,391 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined c9bd4102bc0eee28e3ffb28ff04128af; next sequenceid=134
2016-12-01 19:15:00,405 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null9.1480599549930, length=382
2016-12-01 19:15:00,405 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:00,424 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null9.1480599549930
2016-12-01 19:15:00,425 INFO  [PostOpenDeployTasks:c9bd4102bc0eee28e3ffb28ff04128af] regionserver.HRegionServer: Post open deploy tasks for SYSTEM.CATALOG,,1479977347096.c9bd4102bc0eee28e3ffb28ff04128af.
2016-12-01 19:15:00,425 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null9.1480599549930 after 1ms
2016-12-01 19:15:00,436 INFO  [PostOpenDeployTasks:c9bd4102bc0eee28e3ffb28ff04128af] hbase.MetaTableAccessor: Updated row SYSTEM.CATALOG,,1479977347096.c9bd4102bc0eee28e3ffb28ff04128af. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:00,531 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-1] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/3862bdfc3021330623e5302d2207998e/recovered.edits/0000000000000000142.temp region=3862bdfc3021330623e5302d2207998e
2016-12-01 19:15:00,535 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:00,610 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/3862bdfc3021330623e5302d2207998e/recovered.edits/0000000000000000142.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/3862bdfc3021330623e5302d2207998e/recovered.edits/0000000000000000142
2016-12-01 19:15:00,611 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null9.1480599549930, length=382, corrupted=false, progress failed=false
2016-12-01 19:15:00,628 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null9.1480599549930 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:00,629 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@3877fc69 in 251ms
2016-12-01 19:15:01,065 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null7.1480599549798
2016-12-01 19:15:01,095 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null7.1480599549798, length=380
2016-12-01 19:15:01,095 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:01,127 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null7.1480599549798
2016-12-01 19:15:01,128 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null7.1480599549798 after 1ms
2016-12-01 19:15:01,211 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/e7a359cdd8fa8f6bf55164aef866ec7b/recovered.edits/0000000000000000005.temp region=e7a359cdd8fa8f6bf55164aef866ec7b
2016-12-01 19:15:01,211 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:01,291 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/e7a359cdd8fa8f6bf55164aef866ec7b/recovered.edits/0000000000000000005.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/e7a359cdd8fa8f6bf55164aef866ec7b/recovered.edits/0000000000000000005
2016-12-01 19:15:01,292 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null7.1480599549798, length=380, corrupted=false, progress failed=false
2016-12-01 19:15:01,308 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null7.1480599549798 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:01,309 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@74f5bf4a in 243ms
2016-12-01 19:15:01,407 INFO  [Thread-285] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x2be166bc connecting to ZooKeeper ensemble=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181
2016-12-01 19:15:01,407 INFO  [Thread-285] zookeeper.ZooKeeper: Initiating client connection, connectString=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181 sessionTimeout=120000 watcher=hconnection-0x2be166bc0x0, quorum=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181, baseZNode=/hbase-secure
2016-12-01 19:15:01,409 INFO  [Thread-285-SendThread(hscale-dev1-dn4:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-12-01 19:15:01,410 INFO  [Thread-285-SendThread(hscale-dev1-dn4:2181)] zookeeper.ClientCnxn: Opening socket connection to server hscale-dev1-dn4/10.60.70.14:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-12-01 19:15:01,411 INFO  [Thread-285-SendThread(hscale-dev1-dn4:2181)] zookeeper.ClientCnxn: Socket connection established to hscale-dev1-dn4/10.60.70.14:2181, initiating session
2016-12-01 19:15:01,432 INFO  [Thread-285-SendThread(hscale-dev1-dn4:2181)] zookeeper.ClientCnxn: Session establishment complete on server hscale-dev1-dn4/10.60.70.14:2181, sessionid = 0x458ba9a26600015, negotiated timeout = 120000
2016-12-01 19:15:01,444 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:14:59 IST 2016, RpcRetryingCaller{globalStartTime=1480599899418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:59 IST 2016, RpcRetryingCaller{globalStartTime=1480599899418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:14:59 IST 2016, RpcRetryingCaller{globalStartTime=1480599899418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:15:00 IST 2016, RpcRetryingCaller{globalStartTime=1480599899418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:15:01 IST 2016, RpcRetryingCaller{globalStartTime=1480599899418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)


    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.getClusterStatus(MasterProtos.java:58140)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.getClusterStatus(ConnectionManager.java:2036)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2769)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2765)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 14 more
2016-12-01 19:15:01,446 INFO  [Thread-285] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x600cc76e connecting to ZooKeeper ensemble=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181
2016-12-01 19:15:01,446 INFO  [Thread-285] zookeeper.ZooKeeper: Initiating client connection, connectString=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181 sessionTimeout=120000 watcher=hconnection-0x600cc76e0x0, quorum=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181, baseZNode=/hbase-secure
2016-12-01 19:15:01,447 INFO  [Thread-285-SendThread(hscale-dev1-dn1:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-12-01 19:15:01,448 INFO  [Thread-285-SendThread(hscale-dev1-dn1:2181)] zookeeper.ClientCnxn: Opening socket connection to server hscale-dev1-dn1/10.60.70.11:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-12-01 19:15:01,450 INFO  [Thread-285-SendThread(hscale-dev1-dn1:2181)] zookeeper.ClientCnxn: Socket connection established to hscale-dev1-dn1/10.60.70.11:2181, initiating session
2016-12-01 19:15:01,471 INFO  [Thread-285-SendThread(hscale-dev1-dn1:2181)] zookeeper.ClientCnxn: Session establishment complete on server hscale-dev1-dn1/10.60.70.11:2181, sessionid = 0x158ba9a257c0019, negotiated timeout = 120000
2016-12-01 19:15:01,487 WARN  [Thread-285] hbase.HBaseConfiguration: Config option "hbase.regionserver.lease.period" is deprecated. Instead, use "hbase.client.scanner.timeout.period"
2016-12-01 19:15:01,531 INFO  [Thread-285] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x458ba9a26600015
2016-12-01 19:15:01,549 INFO  [Thread-285] zookeeper.ZooKeeper: Session: 0x458ba9a26600015 closed
2016-12-01 19:15:01,549 INFO  [Thread-285-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-12-01 19:15:01,550 INFO  [Thread-285] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x158ba9a257c0019
2016-12-01 19:15:01,564 INFO  [Thread-285] zookeeper.ZooKeeper: Session: 0x158ba9a257c0019 closed
2016-12-01 19:15:01,564 INFO  [Thread-285-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-12-01 19:15:01,619 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null1.1480599548922
2016-12-01 19:15:01,645 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null1.1480599548922, length=404
2016-12-01 19:15:01,645 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:01,665 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null1.1480599548922
2016-12-01 19:15:01,667 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null1.1480599548922 after 2ms
2016-12-01 19:15:01,749 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/3de8ae6766ac73a2f1418a9c4859cd10/recovered.edits/0000000000000000143.temp region=3de8ae6766ac73a2f1418a9c4859cd10
2016-12-01 19:15:01,749 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:01,829 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/3de8ae6766ac73a2f1418a9c4859cd10/recovered.edits/0000000000000000143.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/3de8ae6766ac73a2f1418a9c4859cd10/recovered.edits/0000000000000000143
2016-12-01 19:15:01,829 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null1.1480599548922, length=404, corrupted=false, progress failed=false
2016-12-01 19:15:01,845 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null1.1480599548922 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:01,845 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@5fa75d68 in 226ms
2016-12-01 19:15:02,561 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null14.1480599550345
2016-12-01 19:15:02,585 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null14.1480599550345, length=588
2016-12-01 19:15:02,585 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:02,600 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null14.1480599550345
2016-12-01 19:15:02,601 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null14.1480599550345 after 1ms
2016-12-01 19:15:02,676 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-0] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/a5ba95da3316cedaf5c5175e83c8b1bb/recovered.edits/0000000000000000007.temp region=a5ba95da3316cedaf5c5175e83c8b1bb
2016-12-01 19:15:02,676 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:02,730 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/a5ba95da3316cedaf5c5175e83c8b1bb/recovered.edits/0000000000000000007.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/a5ba95da3316cedaf5c5175e83c8b1bb/recovered.edits/0000000000000000008
2016-12-01 19:15:02,731 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 2 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null14.1480599550345, length=588, corrupted=false, progress failed=false
2016-12-01 19:15:02,762 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null14.1480599550345 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:02,762 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@6222c36 in 201ms
2016-12-01 19:15:03,224 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null15.1480599550413
2016-12-01 19:15:03,240 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null15.1480599550413, length=91
2016-12-01 19:15:03,240 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:03,255 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null15.1480599550413
2016-12-01 19:15:03,256 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null15.1480599550413 after 1ms
2016-12-01 19:15:03,294 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:03,295 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn1,16020,1480599544698-splitting/hscale-dev1-dn1%2C16020%2C1480599544698.null15.1480599550413, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:03,310 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn1%2C16020%2C1480599544698-splitting%2Fhscale-dev1-dn1%252C16020%252C1480599544698.null15.1480599550413 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:03,310 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@7a033a9 in 86ms
2016-12-01 19:15:03,469 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020] regionserver.RSRpcServices: Open PATIENT,2,1479977629367.3862bdfc3021330623e5302d2207998e.
2016-12-01 19:15:03,475 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020] regionserver.RSRpcServices: Open ENCOUNTER,2,1479977632429.34f420bd6a9894c079d9838b0e0ffe79.
2016-12-01 19:15:03,494 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x03,1480593676447.e7a359cdd8fa8f6bf55164aef866ec7b.
2016-12-01 19:15:03,494 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:03,494 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:03,514 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:03,514 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:03,514 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:03,514 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:03,514 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:03,514 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:03,514 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:03,518 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of PATIENT successfully.
2016-12-01 19:15:03,518 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of PATIENT successfully.
2016-12-01 19:15:03,518 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of PATIENT successfully.
2016-12-01 19:15:03,518 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of PATIENT successfully.
2016-12-01 19:15:03,518 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of PATIENT successfully.
2016-12-01 19:15:03,525 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:03,525 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:03,526 INFO  [StoreOpener-3862bdfc3021330623e5302d2207998e-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:03,526 INFO  [StoreOpener-3862bdfc3021330623e5302d2207998e-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:03,544 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:03,544 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:03,544 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:03,544 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:03,544 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:03,544 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:03,544 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:03,547 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:03,547 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:03,549 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of ENCOUNTER successfully.
2016-12-01 19:15:03,549 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of ENCOUNTER successfully.
2016-12-01 19:15:03,549 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of ENCOUNTER successfully.
2016-12-01 19:15:03,549 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of ENCOUNTER successfully.
2016-12-01 19:15:03,549 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of ENCOUNTER successfully.
2016-12-01 19:15:03,551 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/3862bdfc3021330623e5302d2207998e/recovered.edits/0000000000000000142
2016-12-01 19:15:03,556 INFO  [StoreOpener-34f420bd6a9894c079d9838b0e0ffe79-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:03,556 INFO  [StoreOpener-34f420bd6a9894c079d9838b0e0ffe79-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:03,564 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:03,564 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:03,565 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:03,565 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:03,565 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:03,565 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:03,565 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:03,568 INFO  [StoreOpener-e7a359cdd8fa8f6bf55164aef866ec7b-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:03,569 INFO  [StoreOpener-e7a359cdd8fa8f6bf55164aef866ec7b-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:03,571 INFO  [StoreOpener-e7a359cdd8fa8f6bf55164aef866ec7b-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:03,572 INFO  [StoreOpener-e7a359cdd8fa8f6bf55164aef866ec7b-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:03,579 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/34f420bd6a9894c079d9838b0e0ffe79/recovered.edits/0000000000000000160
2016-12-01 19:15:03,579 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/e7a359cdd8fa8f6bf55164aef866ec7b/recovered.edits/0000000000000000005
2016-12-01 19:15:03,648 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined 3862bdfc3021330623e5302d2207998e; next sequenceid=143
2016-12-01 19:15:03,648 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:03,654 INFO  [PostOpenDeployTasks:3862bdfc3021330623e5302d2207998e] regionserver.HRegionServer: Post open deploy tasks for PATIENT,2,1479977629367.3862bdfc3021330623e5302d2207998e.
2016-12-01 19:15:03,665 INFO  [PostOpenDeployTasks:3862bdfc3021330623e5302d2207998e] hbase.MetaTableAccessor: Updated row PATIENT,2,1479977629367.3862bdfc3021330623e5302d2207998e. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:03,674 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined e7a359cdd8fa8f6bf55164aef866ec7b; next sequenceid=6
2016-12-01 19:15:03,685 INFO  [PostOpenDeployTasks:e7a359cdd8fa8f6bf55164aef866ec7b] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x03,1480593676447.e7a359cdd8fa8f6bf55164aef866ec7b.
2016-12-01 19:15:03,686 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined 34f420bd6a9894c079d9838b0e0ffe79; next sequenceid=161
2016-12-01 19:15:03,686 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:03,694 INFO  [PostOpenDeployTasks:e7a359cdd8fa8f6bf55164aef866ec7b] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x03,1480593676447.e7a359cdd8fa8f6bf55164aef866ec7b. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:03,707 INFO  [PostOpenDeployTasks:34f420bd6a9894c079d9838b0e0ffe79] regionserver.HRegionServer: Post open deploy tasks for ENCOUNTER,2,1479977632429.34f420bd6a9894c079d9838b0e0ffe79.
2016-12-01 19:15:03,718 INFO  [PostOpenDeployTasks:34f420bd6a9894c079d9838b0e0ffe79] hbase.MetaTableAccessor: Updated row ENCOUNTER,2,1479977632429.34f420bd6a9894c079d9838b0e0ffe79. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:03,849 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599558513-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599558513.null2.1480599563777
2016-12-01 19:15:03,867 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599558513-splitting/hscale-dev1-dn3%2C16020%2C1480599558513.null2.1480599563777, length=91
2016-12-01 19:15:03,867 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:03,886 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599558513-splitting/hscale-dev1-dn3%2C16020%2C1480599558513.null2.1480599563777
2016-12-01 19:15:03,887 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599558513-splitting/hscale-dev1-dn3%2C16020%2C1480599558513.null2.1480599563777 after 1ms
2016-12-01 19:15:03,937 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:03,937 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599558513-splitting/hscale-dev1-dn3%2C16020%2C1480599558513.null2.1480599563777, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:03,962 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599558513-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599558513.null2.1480599563777 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:03,962 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@1f5af5ac in 113ms
2016-12-01 19:15:04,077 INFO  [PriorityRpcServer.handler=19,queue=1,port=16020] regionserver.RSRpcServices: Open PATIENT,3,1479977629367.1e04655659c5902dd127923cf1a58e61.
2016-12-01 19:15:04,107 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:04,107 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:04,124 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:04,125 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:04,125 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:04,125 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:04,125 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:04,125 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:04,126 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:04,136 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of PATIENT successfully.
2016-12-01 19:15:04,136 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of PATIENT successfully.
2016-12-01 19:15:04,136 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of PATIENT successfully.
2016-12-01 19:15:04,136 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of PATIENT successfully.
2016-12-01 19:15:04,136 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of PATIENT successfully.
2016-12-01 19:15:04,143 INFO  [StoreOpener-1e04655659c5902dd127923cf1a58e61-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:04,143 INFO  [StoreOpener-1e04655659c5902dd127923cf1a58e61-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:04,162 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/1e04655659c5902dd127923cf1a58e61/recovered.edits/0000000000000000145
2016-12-01 19:15:04,231 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined 1e04655659c5902dd127923cf1a58e61; next sequenceid=146
2016-12-01 19:15:04,232 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:04,238 INFO  [PostOpenDeployTasks:1e04655659c5902dd127923cf1a58e61] regionserver.HRegionServer: Post open deploy tasks for PATIENT,3,1479977629367.1e04655659c5902dd127923cf1a58e61.
2016-12-01 19:15:04,245 INFO  [PostOpenDeployTasks:1e04655659c5902dd127923cf1a58e61] hbase.MetaTableAccessor: Updated row PATIENT,3,1479977629367.1e04655659c5902dd127923cf1a58e61. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:04,423 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599548091-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599548091.null3.1480599552327
2016-12-01 19:15:04,446 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599548091-splitting/hscale-dev1-dn4%2C16020%2C1480599548091.null3.1480599552327, length=380
2016-12-01 19:15:04,446 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:04,460 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599548091-splitting/hscale-dev1-dn4%2C16020%2C1480599548091.null3.1480599552327
2016-12-01 19:15:04,461 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599548091-splitting/hscale-dev1-dn4%2C16020%2C1480599548091.null3.1480599552327 after 1ms
2016-12-01 19:15:04,537 INFO  [main-EventThread] replication.ReplicationTrackerZKImpl: /hbase-secure/rs/hscale-dev1-dn4,16020,1480599845544 znode expired, triggering replicatorRemoved event
2016-12-01 19:15:04,580 INFO  [main-EventThread] replication.ReplicationTrackerZKImpl: /hbase-secure/rs/hscale-dev1-dn2,16020,1480599823976 znode expired, triggering replicatorRemoved event
2016-12-01 19:15:04,603 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/067a3d14297c94e8708fe56780f1443b/recovered.edits/0000000000000000005.temp region=067a3d14297c94e8708fe56780f1443b
2016-12-01 19:15:04,603 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:04,676 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/067a3d14297c94e8708fe56780f1443b/recovered.edits/0000000000000000005.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/067a3d14297c94e8708fe56780f1443b/recovered.edits/0000000000000000005
2016-12-01 19:15:04,676 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599548091-splitting/hscale-dev1-dn4%2C16020%2C1480599548091.null3.1480599552327, length=380, corrupted=false, progress failed=false
2016-12-01 19:15:04,703 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599548091-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599548091.null3.1480599552327 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:04,703 INFO  [main-EventThread] replication.ReplicationTrackerZKImpl: /hbase-secure/rs/hscale-dev1-dn3,16020,1480599826952 znode expired, triggering replicatorRemoved event
2016-12-01 19:15:04,703 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@6e214747 in 280ms
2016-12-01 19:15:05,247 WARN  [pool-10-thread-1] client.HBaseAdmin: failed to get the procedure result procId=7
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:15:03 IST 2016, RpcRetryingCaller{globalStartTime=1480599903214, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getProcedureResult(MasterRpcServices.java:1023)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55469)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:15:03 IST 2016, RpcRetryingCaller{globalStartTime=1480599903214, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getProcedureResult(MasterRpcServices.java:1023)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55469)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:15:03 IST 2016, RpcRetryingCaller{globalStartTime=1480599903214, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getProcedureResult(MasterRpcServices.java:1023)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55469)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:15:04 IST 2016, RpcRetryingCaller{globalStartTime=1480599903214, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getProcedureResult(MasterRpcServices.java:1023)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55469)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:15:05 IST 2016, RpcRetryingCaller{globalStartTime=1480599903214, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getProcedureResult(MasterRpcServices.java:1023)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55469)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)


    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.access$700(HBaseAdmin.java:194)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.getProcedureResult(HBaseAdmin.java:4383)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.waitProcedureResult(HBaseAdmin.java:4335)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture.get(HBaseAdmin.java:4291)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:647)
    at org.apache.hadoop.hbase.client.HBaseAdmin.createTable(HBaseAdmin.java:577)
    at com.splicemachine.lifecycle.RegionServerLifecycle.distributedStart(RegionServerLifecycle.java:66)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:81)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getProcedureResult(MasterRpcServices.java:1023)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55469)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.getProcedureResult(MasterProtos.java:58728)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.getProcedureResult(ConnectionManager.java:1951)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture$2.call(HBaseAdmin.java:4387)
    at org.apache.hadoop.hbase.client.HBaseAdmin$ProcedureFuture$2.call(HBaseAdmin.java:4384)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 14 more
2016-12-01 19:15:05,418 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599548091-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599548091.null9.1480599552877
2016-12-01 19:15:05,440 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599548091-splitting/hscale-dev1-dn4%2C16020%2C1480599548091.null9.1480599552877, length=370
2016-12-01 19:15:05,441 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:05,456 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599548091-splitting/hscale-dev1-dn4%2C16020%2C1480599548091.null9.1480599552877
2016-12-01 19:15:05,458 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599548091-splitting/hscale-dev1-dn4%2C16020%2C1480599548091.null9.1480599552877 after 1ms
2016-12-01 19:15:05,500 WARN  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] regionserver.HRegionServer: Unable to connect to the master to check the last flushed sequence id
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=34, waitTime=5
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.RegionServerStatusProtos$RegionServerStatusService$BlockingStub.getLastFlushedSequenceId(RegionServerStatusProtos.java:9018)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getLastSequenceId(HRegionServer.java:2303)
    at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:338)
    at org.apache.hadoop.hbase.wal.WALSplitter.splitLogFile(WALSplitter.java:235)
    at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:104)
    at org.apache.hadoop.hbase.regionserver.handler.WALSplitterHandler.process(WALSplitterHandler.java:72)
    at org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=34, waitTime=5
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 11 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=34, waitTime=5
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:15:05,523 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-1] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/e5c0350ed1099979ad85330cdeded026/recovered.edits/0000000000000000208.temp region=e5c0350ed1099979ad85330cdeded026
2016-12-01 19:15:05,523 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:05,594 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/e5c0350ed1099979ad85330cdeded026/recovered.edits/0000000000000000208.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/e5c0350ed1099979ad85330cdeded026/recovered.edits/0000000000000000208
2016-12-01 19:15:05,595 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599548091-splitting/hscale-dev1-dn4%2C16020%2C1480599548091.null9.1480599552877, length=370, corrupted=false, progress failed=false
2016-12-01 19:15:05,625 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599548091-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599548091.null9.1480599552877 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:05,625 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@21103fbc in 207ms
2016-12-01 19:15:05,629 INFO  [PriorityRpcServer.handler=15,queue=1,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x0A,1480593676447.e408a4ef03608ae6738cde8286584311.
2016-12-01 19:15:05,630 INFO  [PriorityRpcServer.handler=17,queue=1,port=16020] regionserver.RSRpcServices: Open ENCOUNTER,1,1479977632429.17022d5d42890169454bf30a0203da51.
2016-12-01 19:15:05,636 INFO  [PriorityRpcServer.handler=13,queue=1,port=16020] regionserver.RSRpcServices: Open FMD,3,1479977442279.3de8ae6766ac73a2f1418a9c4859cd10.
2016-12-01 19:15:05,642 INFO  [PriorityRpcServer.handler=12,queue=0,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x05,1480593676447.6a3106089ce462b563f88da133dba689.
2016-12-01 19:15:05,643 INFO  [PriorityRpcServer.handler=14,queue=0,port=16020] regionserver.RSRpcServices: Open SS_MSG,,1479977389988.9843b6dc3b86f25d11d823dc0e9c1fc7.
2016-12-01 19:15:05,645 INFO  [PriorityRpcServer.handler=18,queue=0,port=16020] regionserver.RSRpcServices: Open FMD,1,1479977442279.6adc2b33f62f4c61e99b85dff151f1d5.
2016-12-01 19:15:05,649 INFO  [PriorityRpcServer.handler=16,queue=0,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_CONGLOMERATE,,1480593683358.9955cd48fd8835f8c1479dc3df73c74c.
2016-12-01 19:15:05,649 INFO  [PriorityRpcServer.handler=4,queue=0,port=16020] regionserver.RSRpcServices: Open PROCEDURE,1,1479977635472.4083ad26cade9cdb19a51d30c06ae2e7.
2016-12-01 19:15:05,650 INFO  [PriorityRpcServer.handler=0,queue=0,port=16020] regionserver.RSRpcServices: Open SYSTEM.SEQUENCE,,1479977355863.6e9c346369df794e52df35eaba4610b8.
2016-12-01 19:15:05,650 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x0D,1480593676447.2e2a549bac705183ac6c9a4857fb0bfe.
2016-12-01 19:15:05,664 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:05,664 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:05,664 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:05,664 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:05,683 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:05,684 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:05,693 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:05,693 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:05,693 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:05,693 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:05,693 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:05,693 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:05,693 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:05,693 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:05,693 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:05,694 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:05,694 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:05,694 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:05,694 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:05,694 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:05,698 INFO  [StoreOpener-e408a4ef03608ae6738cde8286584311-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:05,698 INFO  [StoreOpener-e408a4ef03608ae6738cde8286584311-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:05,699 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of ENCOUNTER successfully.
2016-12-01 19:15:05,699 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of ENCOUNTER successfully.
2016-12-01 19:15:05,699 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of ENCOUNTER successfully.
2016-12-01 19:15:05,699 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of ENCOUNTER successfully.
2016-12-01 19:15:05,699 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of ENCOUNTER successfully.
2016-12-01 19:15:05,701 INFO  [StoreOpener-e408a4ef03608ae6738cde8286584311-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:05,701 INFO  [StoreOpener-e408a4ef03608ae6738cde8286584311-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:05,704 INFO  [StoreOpener-17022d5d42890169454bf30a0203da51-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:05,704 INFO  [StoreOpener-17022d5d42890169454bf30a0203da51-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:05,706 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:05,706 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:05,706 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:05,706 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:05,706 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:05,706 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:05,706 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:05,707 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/e408a4ef03608ae6738cde8286584311/recovered.edits/0000000000000000005
2016-12-01 19:15:05,709 INFO  [StoreOpener-6a3106089ce462b563f88da133dba689-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:05,709 INFO  [StoreOpener-6a3106089ce462b563f88da133dba689-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:05,712 INFO  [StoreOpener-6a3106089ce462b563f88da133dba689-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:05,712 INFO  [StoreOpener-6a3106089ce462b563f88da133dba689-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:05,718 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/6a3106089ce462b563f88da133dba689/recovered.edits/0000000000000000005
2016-12-01 19:15:05,729 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/17022d5d42890169454bf30a0203da51/recovered.edits/0000000000000000178
2016-12-01 19:15:05,729 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open hbase:acl,,1475487105709.41e9e82ad28787febb776a2cd511592e.
2016-12-01 19:15:05,733 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open PATIENT,1,1479977629367.e4149c66b824f54afdb16348fb0aab6b.
2016-12-01 19:15:05,738 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open FMD,2,1479977442279.e5c0350ed1099979ad85330cdeded026.
2016-12-01 19:15:05,743 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open ORG,,1479977363087.cfcab5f8d1e2f11a21c71478e08205c6.
2016-12-01 19:15:05,747 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open hbase:namespace,,1475255736526.e78851aa341a5a07579e125059b65cab.
2016-12-01 19:15:05,751 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open SYSTEM.FUNCTION,,1479977360587.74957f0a078e8febe4e1a4a17d749db7.
2016-12-01 19:15:05,754 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open PROCEDURE,,1479977635472.99a13a250748cddebf50a0a937a3144a.
2016-12-01 19:15:05,759 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open PROCEDURE,2,1479977635472.eb5cf006e72f3e59c033f8023a559abb.
2016-12-01 19:15:05,760 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x01,1480593676447.d8e5258e5ee4f6c1a61d91bd224d0bfa.
2016-12-01 19:15:05,763 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x04,1480593676447.b9b2c4cfa770388f4d26e83953c2e495.
2016-12-01 19:15:05,764 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x07,1480593676447.443b9beecbf6a1a3264edb20b2230a52.
2016-12-01 19:15:05,765 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x0E,1480593676447.067a3d14297c94e8708fe56780f1443b.
2016-12-01 19:15:05,766 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x0F,1480593676447.a89555c052f2e4a92d6c1feb6047dbb4.
2016-12-01 19:15:05,811 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined e408a4ef03608ae6738cde8286584311; next sequenceid=6
2016-12-01 19:15:05,819 INFO  [PostOpenDeployTasks:e408a4ef03608ae6738cde8286584311] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x0A,1480593676447.e408a4ef03608ae6738cde8286584311.
2016-12-01 19:15:05,823 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined 6a3106089ce462b563f88da133dba689; next sequenceid=6
2016-12-01 19:15:05,827 INFO  [PostOpenDeployTasks:e408a4ef03608ae6738cde8286584311] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x0A,1480593676447.e408a4ef03608ae6738cde8286584311. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:05,830 INFO  [PostOpenDeployTasks:6a3106089ce462b563f88da133dba689] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x05,1480593676447.6a3106089ce462b563f88da133dba689.
2016-12-01 19:15:05,837 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined 17022d5d42890169454bf30a0203da51; next sequenceid=179
2016-12-01 19:15:05,837 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:05,839 INFO  [PostOpenDeployTasks:6a3106089ce462b563f88da133dba689] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x05,1480593676447.6a3106089ce462b563f88da133dba689. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:05,850 INFO  [PostOpenDeployTasks:17022d5d42890169454bf30a0203da51] regionserver.HRegionServer: Post open deploy tasks for ENCOUNTER,1,1479977632429.17022d5d42890169454bf30a0203da51.
2016-12-01 19:15:05,856 INFO  [PostOpenDeployTasks:17022d5d42890169454bf30a0203da51] hbase.MetaTableAccessor: Updated row ENCOUNTER,1,1479977632429.17022d5d42890169454bf30a0203da51. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:05,908 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:05,909 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:05,927 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:05,927 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:05,928 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:05,928 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:05,929 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:05,929 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:05,930 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:05,934 INFO  [StoreOpener-2e2a549bac705183ac6c9a4857fb0bfe-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:05,934 INFO  [StoreOpener-2e2a549bac705183ac6c9a4857fb0bfe-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:05,936 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:05,937 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:05,941 INFO  [StoreOpener-2e2a549bac705183ac6c9a4857fb0bfe-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:05,941 INFO  [StoreOpener-2e2a549bac705183ac6c9a4857fb0bfe-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:05,951 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/2e2a549bac705183ac6c9a4857fb0bfe/recovered.edits/0000000000000000005
2016-12-01 19:15:05,951 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:05,951 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:05,956 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:05,956 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:05,956 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:05,956 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:05,956 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:05,956 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:05,956 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:05,959 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of SS_MSG successfully.
2016-12-01 19:15:05,960 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of SS_MSG successfully.
2016-12-01 19:15:05,960 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of SS_MSG successfully.
2016-12-01 19:15:05,960 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of SS_MSG successfully.
2016-12-01 19:15:05,960 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of SS_MSG successfully.
2016-12-01 19:15:05,966 INFO  [StoreOpener-9843b6dc3b86f25d11d823dc0e9c1fc7-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:05,966 INFO  [StoreOpener-9843b6dc3b86f25d11d823dc0e9c1fc7-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:05,968 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:05,968 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:05,968 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:05,969 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:05,969 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:05,969 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:05,969 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:05,969 INFO  [StoreOpener-9843b6dc3b86f25d11d823dc0e9c1fc7-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:05,969 INFO  [StoreOpener-9843b6dc3b86f25d11d823dc0e9c1fc7-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:05,974 INFO  [StoreOpener-9843b6dc3b86f25d11d823dc0e9c1fc7-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:05,974 INFO  [StoreOpener-9843b6dc3b86f25d11d823dc0e9c1fc7-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:05,975 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of FMD successfully.
2016-12-01 19:15:05,975 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of FMD successfully.
2016-12-01 19:15:05,975 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of FMD successfully.
2016-12-01 19:15:05,975 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of FMD successfully.
2016-12-01 19:15:05,975 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of FMD successfully.
2016-12-01 19:15:05,981 INFO  [StoreOpener-3de8ae6766ac73a2f1418a9c4859cd10-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:05,981 INFO  [StoreOpener-3de8ae6766ac73a2f1418a9c4859cd10-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:05,982 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null13.1480599884142
2016-12-01 19:15:06,001 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/SS_MSG/9843b6dc3b86f25d11d823dc0e9c1fc7/recovered.edits/0000000000000000110
2016-12-01 19:15:06,003 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null13.1480599884142, length=91
2016-12-01 19:15:06,003 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:06,016 INFO  [StoreFileOpenerThread-CF1-1] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for ef6f5dc66d474ec8b108db010d759d6c
2016-12-01 19:15:06,024 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null13.1480599884142
2016-12-01 19:15:06,025 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/3de8ae6766ac73a2f1418a9c4859cd10/recovered.edits/0000000000000000143
2016-12-01 19:15:06,025 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null13.1480599884142 after 1ms
2016-12-01 19:15:06,026 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined 2e2a549bac705183ac6c9a4857fb0bfe; next sequenceid=6
2016-12-01 19:15:06,034 INFO  [PostOpenDeployTasks:2e2a549bac705183ac6c9a4857fb0bfe] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x0D,1480593676447.2e2a549bac705183ac6c9a4857fb0bfe.
2016-12-01 19:15:06,043 INFO  [PostOpenDeployTasks:2e2a549bac705183ac6c9a4857fb0bfe] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x0D,1480593676447.2e2a549bac705183ac6c9a4857fb0bfe. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:06,102 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:06,102 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:06,116 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined 9843b6dc3b86f25d11d823dc0e9c1fc7; next sequenceid=111
2016-12-01 19:15:06,116 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:06,123 INFO  [PostOpenDeployTasks:9843b6dc3b86f25d11d823dc0e9c1fc7] regionserver.HRegionServer: Post open deploy tasks for SS_MSG,,1479977389988.9843b6dc3b86f25d11d823dc0e9c1fc7.
2016-12-01 19:15:06,125 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:06,126 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null13.1480599884142, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:06,129 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined 3de8ae6766ac73a2f1418a9c4859cd10; next sequenceid=144
2016-12-01 19:15:06,130 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:06,132 INFO  [PostOpenDeployTasks:9843b6dc3b86f25d11d823dc0e9c1fc7] hbase.MetaTableAccessor: Updated row SS_MSG,,1479977389988.9843b6dc3b86f25d11d823dc0e9c1fc7. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:06,133 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:06,133 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:06,133 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:06,133 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:06,133 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:06,133 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:06,134 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:06,141 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null13.1480599884142 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:06,141 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@c5e89e1 in 158ms
2016-12-01 19:15:06,142 INFO  [PostOpenDeployTasks:3de8ae6766ac73a2f1418a9c4859cd10] regionserver.HRegionServer: Post open deploy tasks for FMD,3,1479977442279.3de8ae6766ac73a2f1418a9c4859cd10.
2016-12-01 19:15:06,145 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of FMD successfully.
2016-12-01 19:15:06,145 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of FMD successfully.
2016-12-01 19:15:06,145 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of FMD successfully.
2016-12-01 19:15:06,145 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of FMD successfully.
2016-12-01 19:15:06,145 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of FMD successfully.
2016-12-01 19:15:06,149 INFO  [PostOpenDeployTasks:3de8ae6766ac73a2f1418a9c4859cd10] hbase.MetaTableAccessor: Updated row FMD,3,1479977442279.3de8ae6766ac73a2f1418a9c4859cd10. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:06,151 INFO  [StoreOpener-6adc2b33f62f4c61e99b85dff151f1d5-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:06,151 INFO  [StoreOpener-6adc2b33f62f4c61e99b85dff151f1d5-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:06,189 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/6adc2b33f62f4c61e99b85dff151f1d5/recovered.edits/0000000000000000177
2016-12-01 19:15:06,217 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:06,217 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:06,235 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:06,235 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:06,242 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x09,1480593676447.4ae4c5ba5fb97295e3d04f32627b110f.
2016-12-01 19:15:06,242 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:06,242 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:06,243 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:06,243 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:06,243 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:06,243 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:06,243 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:06,247 INFO  [PriorityRpcServer.handler=9,queue=1,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x0B,1480593676447.ca8320df34149b34da1b6a0aaa668b5a.
2016-12-01 19:15:06,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:06,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:06,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:06,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:06,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:06,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:06,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:06,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of SYSTEM.SEQUENCE successfully.
2016-12-01 19:15:06,259 INFO  [StoreOpener-9955cd48fd8835f8c1479dc3df73c74c-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:06,259 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.SequenceRegionObserver from HTD of SYSTEM.SEQUENCE successfully.
2016-12-01 19:15:06,259 INFO  [StoreOpener-9955cd48fd8835f8c1479dc3df73c74c-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:06,259 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of SYSTEM.SEQUENCE successfully.
2016-12-01 19:15:06,259 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of SYSTEM.SEQUENCE successfully.
2016-12-01 19:15:06,259 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of SYSTEM.SEQUENCE successfully.
2016-12-01 19:15:06,259 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of SYSTEM.SEQUENCE successfully.
2016-12-01 19:15:06,265 INFO  [StoreOpener-6e9c346369df794e52df35eaba4610b8-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:06,265 INFO  [StoreOpener-6e9c346369df794e52df35eaba4610b8-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:06,266 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_CONGLOMERATE/9955cd48fd8835f8c1479dc3df73c74c/recovered.edits/0000000000000000005
2016-12-01 19:15:06,272 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/SYSTEM.SEQUENCE/6e9c346369df794e52df35eaba4610b8/recovered.edits/0000000000000000113
2016-12-01 19:15:06,277 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined 6adc2b33f62f4c61e99b85dff151f1d5; next sequenceid=178
2016-12-01 19:15:06,277 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:06,283 INFO  [PostOpenDeployTasks:6adc2b33f62f4c61e99b85dff151f1d5] regionserver.HRegionServer: Post open deploy tasks for FMD,1,1479977442279.6adc2b33f62f4c61e99b85dff151f1d5.
2016-12-01 19:15:06,288 INFO  [PostOpenDeployTasks:6adc2b33f62f4c61e99b85dff151f1d5] hbase.MetaTableAccessor: Updated row FMD,1,1479977442279.6adc2b33f62f4c61e99b85dff151f1d5. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:06,331 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:06,331 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:06,358 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:06,358 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:06,358 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:06,359 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:06,359 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:06,359 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:06,359 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:06,367 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined 9955cd48fd8835f8c1479dc3df73c74c; next sequenceid=6
2016-12-01 19:15:06,373 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of PROCEDURE successfully.
2016-12-01 19:15:06,373 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of PROCEDURE successfully.
2016-12-01 19:15:06,373 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of PROCEDURE successfully.
2016-12-01 19:15:06,374 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of PROCEDURE successfully.
2016-12-01 19:15:06,374 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of PROCEDURE successfully.
2016-12-01 19:15:06,374 INFO  [PostOpenDeployTasks:9955cd48fd8835f8c1479dc3df73c74c] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_CONGLOMERATE,,1480593683358.9955cd48fd8835f8c1479dc3df73c74c.
2016-12-01 19:15:06,379 INFO  [PostOpenDeployTasks:9955cd48fd8835f8c1479dc3df73c74c] hbase.MetaTableAccessor: Updated row splice:SPLICE_CONGLOMERATE,,1480593683358.9955cd48fd8835f8c1479dc3df73c74c. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:06,380 INFO  [StoreOpener-4083ad26cade9cdb19a51d30c06ae2e7-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:06,380 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined 6e9c346369df794e52df35eaba4610b8; next sequenceid=114
2016-12-01 19:15:06,380 INFO  [StoreOpener-4083ad26cade9cdb19a51d30c06ae2e7-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:06,380 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:06,399 INFO  [PostOpenDeployTasks:6e9c346369df794e52df35eaba4610b8] regionserver.HRegionServer: Post open deploy tasks for SYSTEM.SEQUENCE,,1479977355863.6e9c346369df794e52df35eaba4610b8.
2016-12-01 19:15:06,404 INFO  [PostOpenDeployTasks:6e9c346369df794e52df35eaba4610b8] hbase.MetaTableAccessor: Updated row SYSTEM.SEQUENCE,,1479977355863.6e9c346369df794e52df35eaba4610b8. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:06,405 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PROCEDURE/4083ad26cade9cdb19a51d30c06ae2e7/recovered.edits/0000000000000000119
2016-12-01 19:15:06,420 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:06,420 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:06,443 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:15:04 IST 2016, RpcRetryingCaller{globalStartTime=1480599904418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:15:04 IST 2016, RpcRetryingCaller{globalStartTime=1480599904418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:15:04 IST 2016, RpcRetryingCaller{globalStartTime=1480599904418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:15:05 IST 2016, RpcRetryingCaller{globalStartTime=1480599904418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:15:06 IST 2016, RpcRetryingCaller{globalStartTime=1480599904418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)


    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.getClusterStatus(MasterProtos.java:58140)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.getClusterStatus(ConnectionManager.java:2036)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2769)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2765)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 14 more
2016-12-01 19:15:06,450 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:06,451 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:06,451 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:06,451 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:06,451 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:06,451 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:06,451 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:06,457 INFO  [StoreOpener-41e9e82ad28787febb776a2cd511592e-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:06,458 INFO  [StoreOpener-41e9e82ad28787febb776a2cd511592e-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:06,470 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:06,470 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:06,489 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:06,489 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:06,489 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:06,489 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:06,489 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:06,489 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:06,490 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:06,496 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of PATIENT successfully.
2016-12-01 19:15:06,496 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of PATIENT successfully.
2016-12-01 19:15:06,496 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of PATIENT successfully.
2016-12-01 19:15:06,496 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of PATIENT successfully.
2016-12-01 19:15:06,496 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of PATIENT successfully.
2016-12-01 19:15:06,500 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined 4083ad26cade9cdb19a51d30c06ae2e7; next sequenceid=120
2016-12-01 19:15:06,500 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:06,502 INFO  [StoreOpener-e4149c66b824f54afdb16348fb0aab6b-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:06,502 INFO  [StoreOpener-e4149c66b824f54afdb16348fb0aab6b-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:06,505 INFO  [PostOpenDeployTasks:4083ad26cade9cdb19a51d30c06ae2e7] regionserver.HRegionServer: Post open deploy tasks for PROCEDURE,1,1479977635472.4083ad26cade9cdb19a51d30c06ae2e7.
2016-12-01 19:15:06,516 INFO  [PostOpenDeployTasks:4083ad26cade9cdb19a51d30c06ae2e7] hbase.MetaTableAccessor: Updated row PROCEDURE,1,1479977635472.4083ad26cade9cdb19a51d30c06ae2e7. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:06,517 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/e4149c66b824f54afdb16348fb0aab6b/recovered.edits/0000000000000000131
2016-12-01 19:15:06,528 INFO  [StoreFileOpenerThread-l-1] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for def81bab30174253b689a5ff1ca65514
2016-12-01 19:15:06,534 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/acl/41e9e82ad28787febb776a2cd511592e/recovered.edits/0000000000000001175
2016-12-01 19:15:06,540 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Started memstore flush for hbase:acl,,1475487105709.41e9e82ad28787febb776a2cd511592e., current region memstore size 168 B, and 1/1 column families' memstores are being flushed.; wal is null, using passed sequenceid=1175
2016-12-01 19:15:06,561 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:06,561 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:06,581 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:06,582 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:06,582 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:06,582 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:06,582 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:06,582 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:06,582 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:06,594 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of FMD successfully.
2016-12-01 19:15:06,594 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of FMD successfully.
2016-12-01 19:15:06,594 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of FMD successfully.
2016-12-01 19:15:06,594 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of FMD successfully.
2016-12-01 19:15:06,594 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of FMD successfully.
2016-12-01 19:15:06,600 INFO  [StoreOpener-e5c0350ed1099979ad85330cdeded026-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:06,600 INFO  [StoreOpener-e5c0350ed1099979ad85330cdeded026-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:06,614 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/FMD/e5c0350ed1099979ad85330cdeded026/recovered.edits/0000000000000000208
2016-12-01 19:15:06,632 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined e4149c66b824f54afdb16348fb0aab6b; next sequenceid=132
2016-12-01 19:15:06,633 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:06,641 INFO  [PostOpenDeployTasks:e4149c66b824f54afdb16348fb0aab6b] regionserver.HRegionServer: Post open deploy tasks for PATIENT,1,1479977629367.e4149c66b824f54afdb16348fb0aab6b.
2016-12-01 19:15:06,651 INFO  [PostOpenDeployTasks:e4149c66b824f54afdb16348fb0aab6b] hbase.MetaTableAccessor: Updated row PATIENT,1,1479977629367.e4149c66b824f54afdb16348fb0aab6b. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:06,658 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1175, memsize=168, hasBloomFilter=false, into tmp file hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/acl/41e9e82ad28787febb776a2cd511592e/.tmp/9dfb10a028074e8eb06bc35a7f4515c1
2016-12-01 19:15:06,703 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HStore: Added hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/acl/41e9e82ad28787febb776a2cd511592e/l/9dfb10a028074e8eb06bc35a7f4515c1, entries=1, sequenceid=1175, filesize=4.6 K
2016-12-01 19:15:06,704 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Finished memstore flush of ~168 B/168, currentsize=0 B/0 for region hbase:acl,,1475487105709.41e9e82ad28787febb776a2cd511592e. in 164ms, sequenceid=1175, compaction requested=true; wal=null
2016-12-01 19:15:06,709 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:06,709 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:06,710 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined e5c0350ed1099979ad85330cdeded026; next sequenceid=209
2016-12-01 19:15:06,710 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:06,722 INFO  [PostOpenDeployTasks:e5c0350ed1099979ad85330cdeded026] regionserver.HRegionServer: Post open deploy tasks for FMD,2,1479977442279.e5c0350ed1099979ad85330cdeded026.
2016-12-01 19:15:06,727 INFO  [PostOpenDeployTasks:e5c0350ed1099979ad85330cdeded026] hbase.MetaTableAccessor: Updated row FMD,2,1479977442279.e5c0350ed1099979ad85330cdeded026. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:06,739 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:06,739 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:06,739 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:06,739 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:06,740 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:06,740 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:06,740 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:06,754 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of ORG successfully.
2016-12-01 19:15:06,754 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of ORG successfully.
2016-12-01 19:15:06,754 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of ORG successfully.
2016-12-01 19:15:06,754 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of ORG successfully.
2016-12-01 19:15:06,754 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of ORG successfully.
2016-12-01 19:15:06,765 INFO  [StoreOpener-cfcab5f8d1e2f11a21c71478e08205c6-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:06,765 INFO  [StoreOpener-cfcab5f8d1e2f11a21c71478e08205c6-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:06,766 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:06,766 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:06,780 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ORG/cfcab5f8d1e2f11a21c71478e08205c6/recovered.edits/0000000000000000108
2016-12-01 19:15:06,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:06,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:06,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:06,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:06,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:06,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:06,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:06,789 INFO  [StoreOpener-e78851aa341a5a07579e125059b65cab-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=18, currentSize=1475144, freeSize=1287015096, maxSize=1288490240, heapSize=1475144, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:06,789 INFO  [StoreOpener-e78851aa341a5a07579e125059b65cab-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:06,820 INFO  [StoreFileOpenerThread-info-1] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for f522c74e659149a6b31a0340cff681c9
2016-12-01 19:15:06,821 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined 41e9e82ad28787febb776a2cd511592e; next sequenceid=1176
2016-12-01 19:15:06,833 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/namespace/e78851aa341a5a07579e125059b65cab/recovered.edits/0000000000000000159
2016-12-01 19:15:06,868 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined cfcab5f8d1e2f11a21c71478e08205c6; next sequenceid=109
2016-12-01 19:15:06,868 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:06,874 INFO  [PostOpenDeployTasks:cfcab5f8d1e2f11a21c71478e08205c6] regionserver.HRegionServer: Post open deploy tasks for ORG,,1479977363087.cfcab5f8d1e2f11a21c71478e08205c6.
2016-12-01 19:15:06,880 INFO  [PostOpenDeployTasks:cfcab5f8d1e2f11a21c71478e08205c6] hbase.MetaTableAccessor: Updated row ORG,,1479977363087.cfcab5f8d1e2f11a21c71478e08205c6. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:06,928 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined e78851aa341a5a07579e125059b65cab; next sequenceid=160
2016-12-01 19:15:06,941 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:06,941 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:06,953 INFO  [PostOpenDeployTasks:e78851aa341a5a07579e125059b65cab] regionserver.HRegionServer: Post open deploy tasks for hbase:namespace,,1475255736526.e78851aa341a5a07579e125059b65cab.
2016-12-01 19:15:06,959 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:06,959 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:06,960 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:06,960 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:06,960 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:06,960 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:06,960 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:06,962 INFO  [PostOpenDeployTasks:e78851aa341a5a07579e125059b65cab] hbase.MetaTableAccessor: Updated row hbase:namespace,,1475255736526.e78851aa341a5a07579e125059b65cab. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:06,974 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of SYSTEM.FUNCTION successfully.
2016-12-01 19:15:06,974 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.MetaDataEndpointImpl: Starting Tracing-Metrics Systems
2016-12-01 19:15:06,975 WARN  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] metrics.Metrics: Phoenix metrics2/tracing sink was not started. Should be it be?
2016-12-01 19:15:06,976 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.MetaDataEndpointImpl from HTD of SYSTEM.FUNCTION successfully.
2016-12-01 19:15:06,976 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of SYSTEM.FUNCTION successfully.
2016-12-01 19:15:06,976 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of SYSTEM.FUNCTION successfully.
2016-12-01 19:15:06,976 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of SYSTEM.FUNCTION successfully.
2016-12-01 19:15:06,976 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of SYSTEM.FUNCTION successfully.
2016-12-01 19:15:06,982 INFO  [StoreOpener-74957f0a078e8febe4e1a4a17d749db7-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=23, currentSize=1480008, freeSize=1287010232, maxSize=1288490240, heapSize=1480008, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:06,982 INFO  [StoreOpener-74957f0a078e8febe4e1a4a17d749db7-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:06,989 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/SYSTEM.FUNCTION/74957f0a078e8febe4e1a4a17d749db7/recovered.edits/0000000000000000099
2016-12-01 19:15:07,011 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null15.1480599884369
2016-12-01 19:15:07,029 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null15.1480599884369, length=91
2016-12-01 19:15:07,029 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:07,037 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:07,037 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:07,064 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null15.1480599884369
2016-12-01 19:15:07,064 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:07,064 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:07,064 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:07,064 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:07,064 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:07,064 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:07,064 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:07,065 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null15.1480599884369 after 1ms
2016-12-01 19:15:07,069 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of PROCEDURE successfully.
2016-12-01 19:15:07,069 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of PROCEDURE successfully.
2016-12-01 19:15:07,069 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of PROCEDURE successfully.
2016-12-01 19:15:07,069 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of PROCEDURE successfully.
2016-12-01 19:15:07,069 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of PROCEDURE successfully.
2016-12-01 19:15:07,075 INFO  [StoreOpener-99a13a250748cddebf50a0a937a3144a-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=23, currentSize=1480008, freeSize=1287010232, maxSize=1288490240, heapSize=1480008, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,075 INFO  [StoreOpener-99a13a250748cddebf50a0a937a3144a-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,081 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PROCEDURE/99a13a250748cddebf50a0a937a3144a/recovered.edits/0000000000000000105
2016-12-01 19:15:07,088 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined 74957f0a078e8febe4e1a4a17d749db7; next sequenceid=100
2016-12-01 19:15:07,088 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:07,096 INFO  [PostOpenDeployTasks:74957f0a078e8febe4e1a4a17d749db7] regionserver.HRegionServer: Post open deploy tasks for SYSTEM.FUNCTION,,1479977360587.74957f0a078e8febe4e1a4a17d749db7.
2016-12-01 19:15:07,105 INFO  [PostOpenDeployTasks:74957f0a078e8febe4e1a4a17d749db7] hbase.MetaTableAccessor: Updated row SYSTEM.FUNCTION,,1479977360587.74957f0a078e8febe4e1a4a17d749db7. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:07,157 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined 99a13a250748cddebf50a0a937a3144a; next sequenceid=106
2016-12-01 19:15:07,157 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:07,159 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:07,159 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:07,171 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:07,171 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:07,171 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:07,171 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:07,171 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:07,171 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:07,171 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:07,172 INFO  [PostOpenDeployTasks:99a13a250748cddebf50a0a937a3144a] regionserver.HRegionServer: Post open deploy tasks for PROCEDURE,,1479977635472.99a13a250748cddebf50a0a937a3144a.
2016-12-01 19:15:07,176 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of PROCEDURE successfully.
2016-12-01 19:15:07,176 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of PROCEDURE successfully.
2016-12-01 19:15:07,176 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of PROCEDURE successfully.
2016-12-01 19:15:07,176 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of PROCEDURE successfully.
2016-12-01 19:15:07,176 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of PROCEDURE successfully.
2016-12-01 19:15:07,177 INFO  [PostOpenDeployTasks:99a13a250748cddebf50a0a937a3144a] hbase.MetaTableAccessor: Updated row PROCEDURE,,1479977635472.99a13a250748cddebf50a0a937a3144a. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:07,183 INFO  [StoreOpener-eb5cf006e72f3e59c033f8023a559abb-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=23, currentSize=1480008, freeSize=1287010232, maxSize=1288490240, heapSize=1480008, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,183 INFO  [StoreOpener-eb5cf006e72f3e59c033f8023a559abb-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,204 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PROCEDURE/eb5cf006e72f3e59c033f8023a559abb/recovered.edits/0000000000000000128
2016-12-01 19:15:07,234 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:07,234 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:07,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:07,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:07,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:07,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:07,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:07,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:07,255 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:07,259 INFO  [StoreOpener-d8e5258e5ee4f6c1a61d91bd224d0bfa-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=27, currentSize=1482840, freeSize=1287007400, maxSize=1288490240, heapSize=1482840, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,259 INFO  [StoreOpener-d8e5258e5ee4f6c1a61d91bd224d0bfa-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,261 INFO  [StoreOpener-d8e5258e5ee4f6c1a61d91bd224d0bfa-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=27, currentSize=1482840, freeSize=1287007400, maxSize=1288490240, heapSize=1482840, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,261 INFO  [StoreOpener-d8e5258e5ee4f6c1a61d91bd224d0bfa-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,270 INFO  [StoreFileOpenerThread-V-1] compress.CodecPool: Got brand-new decompressor [.snappy]
2016-12-01 19:15:07,278 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/d8e5258e5ee4f6c1a61d91bd224d0bfa/recovered.edits/0000000000000000011
2016-12-01 19:15:07,293 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined eb5cf006e72f3e59c033f8023a559abb; next sequenceid=129
2016-12-01 19:15:07,293 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:07,302 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Started memstore flush for splice:SPLICE_TXN,\x01,1480593676447.d8e5258e5ee4f6c1a61d91bd224d0bfa., current region memstore size 1.12 KB, and 2/2 column families' memstores are being flushed.; wal is null, using passed sequenceid=11
2016-12-01 19:15:07,315 INFO  [PostOpenDeployTasks:eb5cf006e72f3e59c033f8023a559abb] regionserver.HRegionServer: Post open deploy tasks for PROCEDURE,2,1479977635472.eb5cf006e72f3e59c033f8023a559abb.
2016-12-01 19:15:07,321 INFO  [PostOpenDeployTasks:eb5cf006e72f3e59c033f8023a559abb] hbase.MetaTableAccessor: Updated row PROCEDURE,2,1479977635472.eb5cf006e72f3e59c033f8023a559abb. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:07,329 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] compress.CodecPool: Got brand-new compressor [.snappy]
2016-12-01 19:15:07,376 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=11, memsize=1.1 K, hasBloomFilter=true, into tmp file hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/d8e5258e5ee4f6c1a61d91bd224d0bfa/.tmp/8dfb002b7760434e948ca974728bb58c
2016-12-01 19:15:07,415 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HStore: Added hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/d8e5258e5ee4f6c1a61d91bd224d0bfa/V/8dfb002b7760434e948ca974728bb58c, entries=7, sequenceid=11, filesize=4.9 K
2016-12-01 19:15:07,416 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Finished memstore flush of ~1.12 KB/1144, currentsize=0 B/0 for region splice:SPLICE_TXN,\x01,1480593676447.d8e5258e5ee4f6c1a61d91bd224d0bfa. in 114ms, sequenceid=11, compaction requested=false; wal=null
2016-12-01 19:15:07,436 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:07,436 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:07,454 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:07,454 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:07,455 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:07,455 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:07,455 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:07,455 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:07,455 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:07,458 INFO  [StoreOpener-b9b2c4cfa770388f4d26e83953c2e495-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=28, currentSize=1483464, freeSize=1287006776, maxSize=1288490240, heapSize=1483464, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,459 INFO  [StoreOpener-b9b2c4cfa770388f4d26e83953c2e495-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,461 INFO  [StoreOpener-b9b2c4cfa770388f4d26e83953c2e495-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=28, currentSize=1483464, freeSize=1287006776, maxSize=1288490240, heapSize=1483464, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,461 INFO  [StoreOpener-b9b2c4cfa770388f4d26e83953c2e495-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,467 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/b9b2c4cfa770388f4d26e83953c2e495/recovered.edits/0000000000000000005
2016-12-01 19:15:07,492 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined d8e5258e5ee4f6c1a61d91bd224d0bfa; next sequenceid=12
2016-12-01 19:15:07,509 INFO  [PostOpenDeployTasks:d8e5258e5ee4f6c1a61d91bd224d0bfa] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x01,1480593676447.d8e5258e5ee4f6c1a61d91bd224d0bfa.
2016-12-01 19:15:07,518 INFO  [PostOpenDeployTasks:d8e5258e5ee4f6c1a61d91bd224d0bfa] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x01,1480593676447.d8e5258e5ee4f6c1a61d91bd224d0bfa. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:07,552 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined b9b2c4cfa770388f4d26e83953c2e495; next sequenceid=6
2016-12-01 19:15:07,567 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:07,567 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:07,568 INFO  [PostOpenDeployTasks:b9b2c4cfa770388f4d26e83953c2e495] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x04,1480593676447.b9b2c4cfa770388f4d26e83953c2e495.
2016-12-01 19:15:07,574 INFO  [PostOpenDeployTasks:b9b2c4cfa770388f4d26e83953c2e495] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x04,1480593676447.b9b2c4cfa770388f4d26e83953c2e495. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:07,584 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:07,584 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:07,584 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:07,584 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:07,584 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:07,584 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:07,584 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:07,588 INFO  [StoreOpener-443b9beecbf6a1a3264edb20b2230a52-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=28, currentSize=1483464, freeSize=1287006776, maxSize=1288490240, heapSize=1483464, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,588 INFO  [StoreOpener-443b9beecbf6a1a3264edb20b2230a52-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,591 INFO  [StoreOpener-443b9beecbf6a1a3264edb20b2230a52-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=28, currentSize=1483464, freeSize=1287006776, maxSize=1288490240, heapSize=1483464, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,592 INFO  [StoreOpener-443b9beecbf6a1a3264edb20b2230a52-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,600 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/443b9beecbf6a1a3264edb20b2230a52/recovered.edits/0000000000000000005
2016-12-01 19:15:07,649 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:07,649 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:07,663 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:07,663 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null15.1480599884369, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:07,671 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:07,671 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:07,672 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:07,672 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:07,672 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:07,672 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:07,672 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:07,675 INFO  [StoreOpener-067a3d14297c94e8708fe56780f1443b-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=28, currentSize=1483464, freeSize=1287006776, maxSize=1288490240, heapSize=1483464, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,676 INFO  [StoreOpener-067a3d14297c94e8708fe56780f1443b-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,678 INFO  [StoreOpener-067a3d14297c94e8708fe56780f1443b-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=28, currentSize=1483464, freeSize=1287006776, maxSize=1288490240, heapSize=1483464, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,679 INFO  [StoreOpener-067a3d14297c94e8708fe56780f1443b-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,685 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null15.1480599884369 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:07,685 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@2160a9a5 in 674ms
2016-12-01 19:15:07,686 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/067a3d14297c94e8708fe56780f1443b/recovered.edits/0000000000000000005
2016-12-01 19:15:07,694 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined 443b9beecbf6a1a3264edb20b2230a52; next sequenceid=6
2016-12-01 19:15:07,700 INFO  [PostOpenDeployTasks:443b9beecbf6a1a3264edb20b2230a52] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x07,1480593676447.443b9beecbf6a1a3264edb20b2230a52.
2016-12-01 19:15:07,703 INFO  [PostOpenDeployTasks:41e9e82ad28787febb776a2cd511592e] regionserver.HRegionServer: Post open deploy tasks for hbase:acl,,1475487105709.41e9e82ad28787febb776a2cd511592e.
2016-12-01 19:15:07,704 INFO  [PostOpenDeployTasks:443b9beecbf6a1a3264edb20b2230a52] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x07,1480593676447.443b9beecbf6a1a3264edb20b2230a52. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:07,719 INFO  [PostOpenDeployTasks:41e9e82ad28787febb776a2cd511592e] hbase.MetaTableAccessor: Updated row hbase:acl,,1475487105709.41e9e82ad28787febb776a2cd511592e. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:07,724 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-shortCompactions-1480599907715] regionserver.HRegion: Starting compaction on l in region hbase:acl,,1475487105709.41e9e82ad28787febb776a2cd511592e.
2016-12-01 19:15:07,724 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-shortCompactions-1480599907715] regionserver.HStore: Starting compaction of 5 file(s) in l of hbase:acl,,1475487105709.41e9e82ad28787febb776a2cd511592e. into tmpdir=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/acl/41e9e82ad28787febb776a2cd511592e/.tmp, totalSize=24.7 K
2016-12-01 19:15:07,730 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-shortCompactions-1480599907715] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=28, currentSize=1483464, freeSize=1287006776, maxSize=1288490240, heapSize=1483464, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,754 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:07,755 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:07,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:07,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:07,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:07,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:07,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:07,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:07,784 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:07,787 INFO  [StoreOpener-a89555c052f2e4a92d6c1feb6047dbb4-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=29, currentSize=1484104, freeSize=1287006136, maxSize=1288490240, heapSize=1484104, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,788 INFO  [StoreOpener-a89555c052f2e4a92d6c1feb6047dbb4-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,790 INFO  [StoreOpener-a89555c052f2e4a92d6c1feb6047dbb4-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=29, currentSize=1484104, freeSize=1287006136, maxSize=1288490240, heapSize=1484104, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,790 INFO  [StoreOpener-a89555c052f2e4a92d6c1feb6047dbb4-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,794 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined 067a3d14297c94e8708fe56780f1443b; next sequenceid=6
2016-12-01 19:15:07,796 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/a89555c052f2e4a92d6c1feb6047dbb4/recovered.edits/0000000000000000005
2016-12-01 19:15:07,800 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:07,800 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:07,816 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null6.1480599883504
2016-12-01 19:15:07,817 INFO  [PostOpenDeployTasks:067a3d14297c94e8708fe56780f1443b] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x0E,1480593676447.067a3d14297c94e8708fe56780f1443b.
2016-12-01 19:15:07,822 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:07,823 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:07,823 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:07,823 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:07,823 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:07,823 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:07,824 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:07,826 INFO  [PostOpenDeployTasks:067a3d14297c94e8708fe56780f1443b] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x0E,1480593676447.067a3d14297c94e8708fe56780f1443b. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:07,829 INFO  [StoreOpener-4ae4c5ba5fb97295e3d04f32627b110f-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=29, currentSize=1484104, freeSize=1287006136, maxSize=1288490240, heapSize=1484104, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,829 INFO  [StoreOpener-4ae4c5ba5fb97295e3d04f32627b110f-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,844 INFO  [StoreOpener-4ae4c5ba5fb97295e3d04f32627b110f-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=29, currentSize=1484104, freeSize=1287006136, maxSize=1288490240, heapSize=1484104, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,844 INFO  [StoreOpener-4ae4c5ba5fb97295e3d04f32627b110f-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,849 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null6.1480599883504, length=91
2016-12-01 19:15:07,849 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:07,854 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/4ae4c5ba5fb97295e3d04f32627b110f/recovered.edits/0000000000000000007
2016-12-01 19:15:07,856 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-shortCompactions-1480599907715] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 58e440a5253b470b95cb1d86a9791815
2016-12-01 19:15:07,866 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null6.1480599883504
2016-12-01 19:15:07,867 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null6.1480599883504 after 1ms
2016-12-01 19:15:07,879 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-shortCompactions-1480599907715] regionserver.StoreFile$Reader: Loaded Delete Family Bloom (CompoundBloomFilter) metadata for 58e440a5253b470b95cb1d86a9791815
2016-12-01 19:15:07,883 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:07,883 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:07,896 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined a89555c052f2e4a92d6c1feb6047dbb4; next sequenceid=6
2016-12-01 19:15:07,904 INFO  [PostOpenDeployTasks:a89555c052f2e4a92d6c1feb6047dbb4] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x0F,1480593676447.a89555c052f2e4a92d6c1feb6047dbb4.
2016-12-01 19:15:07,908 INFO  [PostOpenDeployTasks:a89555c052f2e4a92d6c1feb6047dbb4] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x0F,1480593676447.a89555c052f2e4a92d6c1feb6047dbb4. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:07,912 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:07,912 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:07,912 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:07,912 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:07,912 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:07,912 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:07,912 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:07,915 INFO  [StoreOpener-ca8320df34149b34da1b6a0aaa668b5a-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=29, currentSize=1484104, freeSize=1287006136, maxSize=1288490240, heapSize=1484104, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,915 INFO  [StoreOpener-ca8320df34149b34da1b6a0aaa668b5a-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,918 INFO  [StoreOpener-ca8320df34149b34da1b6a0aaa668b5a-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=29, currentSize=1484104, freeSize=1287006136, maxSize=1288490240, heapSize=1484104, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:07,918 INFO  [StoreOpener-ca8320df34149b34da1b6a0aaa668b5a-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:07,926 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/ca8320df34149b34da1b6a0aaa668b5a/recovered.edits/0000000000000000007
2016-12-01 19:15:07,947 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined 4ae4c5ba5fb97295e3d04f32627b110f; next sequenceid=8
2016-12-01 19:15:07,952 INFO  [PostOpenDeployTasks:4ae4c5ba5fb97295e3d04f32627b110f] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x09,1480593676447.4ae4c5ba5fb97295e3d04f32627b110f.
2016-12-01 19:15:07,956 INFO  [PostOpenDeployTasks:4ae4c5ba5fb97295e3d04f32627b110f] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x09,1480593676447.4ae4c5ba5fb97295e3d04f32627b110f. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:07,979 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:07,980 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null6.1480599883504, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:08,009 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null6.1480599883504 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:08,009 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@2e768a74 in 193ms
2016-12-01 19:15:08,146 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined ca8320df34149b34da1b6a0aaa668b5a; next sequenceid=8
2016-12-01 19:15:08,165 INFO  [PostOpenDeployTasks:ca8320df34149b34da1b6a0aaa668b5a] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x0B,1480593676447.ca8320df34149b34da1b6a0aaa668b5a.
2016-12-01 19:15:08,172 INFO  [PostOpenDeployTasks:ca8320df34149b34da1b6a0aaa668b5a] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x0B,1480593676447.ca8320df34149b34da1b6a0aaa668b5a. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:08,271 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-shortCompactions-1480599907715] regionserver.HStore: Completed compaction of 5 (all) file(s) in l of hbase:acl,,1475487105709.41e9e82ad28787febb776a2cd511592e. into 58e440a5253b470b95cb1d86a9791815(size=6.3 K), total size for store is 6.3 K. This selection was in queue for 0sec, and took 0sec to execute.
2016-12-01 19:15:08,276 INFO  [regionserver/hscale-dev1-dn1/10.60.70.11:16020-shortCompactions-1480599907715] regionserver.CompactSplitThread: Completed compaction: Request = regionName=hbase:acl,,1475487105709.41e9e82ad28787febb776a2cd511592e., storeName=l, fileCount=5, fileSize=24.7 K, priority=15, time=5346388253629703; duration=0sec
2016-12-01 19:15:08,621 INFO  [PriorityRpcServer.handler=4,queue=0,port=16020] regionserver.RSRpcServices: Open SPLICE_INIT,,1480599804578.86750642ac87f34b3e9424ac3d51787b.
2016-12-01 19:15:08,643 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:08,643 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:08,669 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:08,669 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:08,669 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:08,669 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:08,669 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:08,669 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:08,669 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:08,687 INFO  [StoreOpener-86750642ac87f34b3e9424ac3d51787b-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=29, currentSize=1484104, freeSize=1287006136, maxSize=1288490240, heapSize=1484104, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:08,688 INFO  [StoreOpener-86750642ac87f34b3e9424ac3d51787b-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:08,730 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined 86750642ac87f34b3e9424ac3d51787b; next sequenceid=2
2016-12-01 19:15:08,748 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null5.1480599883449
2016-12-01 19:15:08,748 INFO  [PostOpenDeployTasks:86750642ac87f34b3e9424ac3d51787b] regionserver.HRegionServer: Post open deploy tasks for SPLICE_INIT,,1480599804578.86750642ac87f34b3e9424ac3d51787b.
2016-12-01 19:15:08,757 INFO  [PostOpenDeployTasks:86750642ac87f34b3e9424ac3d51787b] hbase.MetaTableAccessor: Updated row SPLICE_INIT,,1480599804578.86750642ac87f34b3e9424ac3d51787b. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:08,774 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null5.1480599883449, length=91
2016-12-01 19:15:08,774 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:08,791 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null5.1480599883449
2016-12-01 19:15:08,793 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null5.1480599883449 after 2ms
2016-12-01 19:15:08,873 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:08,873 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null5.1480599883449, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:08,890 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null5.1480599883449 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:08,890 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@6927d38d in 142ms
2016-12-01 19:15:09,217 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x4ed69ffc connecting to ZooKeeper ensemble=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181
2016-12-01 19:15:09,217 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020] zookeeper.ZooKeeper: Initiating client connection, connectString=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181 sessionTimeout=120000 watcher=hconnection-0x4ed69ffc0x0, quorum=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181, baseZNode=/hbase-secure
2016-12-01 19:15:09,220 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020-SendThread(hscale-dev1-dn3:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-12-01 19:15:09,221 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020-SendThread(hscale-dev1-dn3:2181)] zookeeper.ClientCnxn: Opening socket connection to server hscale-dev1-dn3/10.60.70.13:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-12-01 19:15:09,222 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020-SendThread(hscale-dev1-dn3:2181)] zookeeper.ClientCnxn: Socket connection established to hscale-dev1-dn3/10.60.70.13:2181, initiating session
2016-12-01 19:15:09,240 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020-SendThread(hscale-dev1-dn3:2181)] zookeeper.ClientCnxn: Session establishment complete on server hscale-dev1-dn3/10.60.70.13:2181, sessionid = 0x358ba9a25c10019, negotiated timeout = 120000
2016-12-01 19:15:09,256 WARN  [PriorityRpcServer.handler=7,queue=1,port=16020] hbase.HBaseConfiguration: Config option "hbase.regionserver.lease.period" is deprecated. Instead, use "hbase.client.scanner.timeout.period"
2016-12-01 19:15:09,277 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x358ba9a25c10019
2016-12-01 19:15:09,294 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020] zookeeper.ZooKeeper: Session: 0x358ba9a25c10019 closed
2016-12-01 19:15:09,294 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-12-01 19:15:09,429 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null1.1480599883086
2016-12-01 19:15:09,456 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null1.1480599883086, length=91
2016-12-01 19:15:09,456 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:09,473 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null1.1480599883086
2016-12-01 19:15:09,474 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null1.1480599883086 after 1ms
2016-12-01 19:15:09,520 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:09,520 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null1.1480599883086, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:09,534 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null1.1480599883086 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:09,534 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@532974ea in 104ms
2016-12-01 19:15:10,262 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null13.1480599884304
2016-12-01 19:15:10,292 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null13.1480599884304, length=91
2016-12-01 19:15:10,292 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:10,312 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null13.1480599884304
2016-12-01 19:15:10,313 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null13.1480599884304 after 1ms
2016-12-01 19:15:10,363 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:10,364 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null13.1480599884304, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:10,381 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null13.1480599884304 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:10,381 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@1cd413c3 in 119ms
2016-12-01 19:15:11,076 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null10.1480599884013
2016-12-01 19:15:11,103 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null10.1480599884013, length=91
2016-12-01 19:15:11,103 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:11,120 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null10.1480599884013
2016-12-01 19:15:11,122 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null10.1480599884013 after 1ms
2016-12-01 19:15:11,169 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:11,169 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null10.1480599884013, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:11,183 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null10.1480599884013 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:11,183 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@1ff22d2c in 107ms
2016-12-01 19:15:11,599 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null8.1480599883830
2016-12-01 19:15:11,624 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null8.1480599883830, length=91
2016-12-01 19:15:11,624 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:11,636 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null8.1480599883830
2016-12-01 19:15:11,638 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null8.1480599883830 after 2ms
2016-12-01 19:15:11,683 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:11,683 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null8.1480599883830, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:11,699 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null8.1480599883830 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:11,699 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@35792895 in 100ms
2016-12-01 19:15:12,394 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null5.1480599883408
2016-12-01 19:15:12,409 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null5.1480599883408, length=91
2016-12-01 19:15:12,409 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:12,423 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null5.1480599883408
2016-12-01 19:15:12,424 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null5.1480599883408 after 1ms
2016-12-01 19:15:12,467 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:12,467 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null5.1480599883408, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:12,485 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null5.1480599883408 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:12,485 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@130b58f0 in 91ms
2016-12-01 19:15:13,183 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null6.1480599883526
2016-12-01 19:15:13,212 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null6.1480599883526, length=91
2016-12-01 19:15:13,212 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:13,229 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null6.1480599883526
2016-12-01 19:15:13,231 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null6.1480599883526 after 2ms
2016-12-01 19:15:13,276 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:13,276 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null6.1480599883526, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:13,295 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null6.1480599883526 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:13,295 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@7917bd10 in 112ms
2016-12-01 19:15:13,938 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null4.1480599883438
2016-12-01 19:15:14,050 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null4.1480599883438, length=380
2016-12-01 19:15:14,050 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:14,067 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null4.1480599883438
2016-12-01 19:15:14,068 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null4.1480599883438 after 1ms
2016-12-01 19:15:14,134 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/720f27c20e300e2c5bc7b5d3b8eddcbf/recovered.edits/0000000000000000009.temp region=720f27c20e300e2c5bc7b5d3b8eddcbf
2016-12-01 19:15:14,134 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:14,215 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/720f27c20e300e2c5bc7b5d3b8eddcbf/recovered.edits/0000000000000000009.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/720f27c20e300e2c5bc7b5d3b8eddcbf/recovered.edits/0000000000000000009
2016-12-01 19:15:14,215 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null4.1480599883438, length=380, corrupted=false, progress failed=false
2016-12-01 19:15:14,230 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null4.1480599883438 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:14,231 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@3bdec9c7 in 293ms
2016-12-01 19:15:14,600 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null0.1480599882799
2016-12-01 19:15:14,625 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null0.1480599882799, length=91
2016-12-01 19:15:14,625 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:14,641 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null0.1480599882799
2016-12-01 19:15:14,643 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null0.1480599882799 after 2ms
2016-12-01 19:15:14,688 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:14,688 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null0.1480599882799, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:14,708 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null0.1480599882799 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:14,708 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@788c4bd5 in 108ms
2016-12-01 19:15:15,142 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null13.1480599884178
2016-12-01 19:15:15,163 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null13.1480599884178, length=91
2016-12-01 19:15:15,163 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:15,180 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null13.1480599884178
2016-12-01 19:15:15,181 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null13.1480599884178 after 1ms
2016-12-01 19:15:15,225 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:15,225 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null13.1480599884178, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:15,244 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null13.1480599884178 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:15,244 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@23a9863a in 102ms
2016-12-01 19:15:15,961 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null12.1480599884046
2016-12-01 19:15:15,986 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null12.1480599884046, length=91
2016-12-01 19:15:15,986 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:16,003 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null12.1480599884046
2016-12-01 19:15:16,004 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null12.1480599884046 after 1ms
2016-12-01 19:15:16,048 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:16,048 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null12.1480599884046, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:16,071 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null12.1480599884046 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:16,071 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@17898b4f in 110ms
2016-12-01 19:15:16,534 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null0.1480599882919
2016-12-01 19:15:16,562 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null0.1480599882919, length=91
2016-12-01 19:15:16,562 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:16,579 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null0.1480599882919
2016-12-01 19:15:16,580 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null0.1480599882919 after 1ms
2016-12-01 19:15:16,621 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:16,622 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null0.1480599882919, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:16,635 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null0.1480599882919 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:16,635 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@60a0350c in 101ms
2016-12-01 19:15:17,235 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null9.1480599883929
2016-12-01 19:15:17,257 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null9.1480599883929, length=91
2016-12-01 19:15:17,257 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:17,271 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null9.1480599883929
2016-12-01 19:15:17,273 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null9.1480599883929 after 2ms
2016-12-01 19:15:17,317 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:17,317 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null9.1480599883929, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:17,333 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null9.1480599883929 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:17,333 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@5b8c8380 in 97ms
2016-12-01 19:15:17,892 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null7.1480599883746
2016-12-01 19:15:17,921 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null7.1480599883746, length=401
2016-12-01 19:15:17,921 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:17,938 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null7.1480599883746
2016-12-01 19:15:17,940 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null7.1480599883746 after 2ms
2016-12-01 19:15:18,004 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/DD_ENTITY_DEF/fa9ccd67af9c529bf1fab5a2893825af/recovered.edits/0000000000000000434.temp region=fa9ccd67af9c529bf1fab5a2893825af
2016-12-01 19:15:18,004 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:18,085 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/DD_ENTITY_DEF/fa9ccd67af9c529bf1fab5a2893825af/recovered.edits/0000000000000000434.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/DD_ENTITY_DEF/fa9ccd67af9c529bf1fab5a2893825af/recovered.edits/0000000000000000434
2016-12-01 19:15:18,085 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null7.1480599883746, length=401, corrupted=false, progress failed=false
2016-12-01 19:15:18,100 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null7.1480599883746 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:18,100 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@45a24e85 in 208ms
2016-12-01 19:15:18,504 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null4.1480599883357
2016-12-01 19:15:18,532 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null4.1480599883357, length=91
2016-12-01 19:15:18,532 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:18,549 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null4.1480599883357
2016-12-01 19:15:18,550 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null4.1480599883357 after 1ms
2016-12-01 19:15:18,592 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:18,592 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null4.1480599883357, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:18,640 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null4.1480599883357 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:18,640 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@660c851b in 136ms
2016-12-01 19:15:19,157 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null6.1480599883649
2016-12-01 19:15:19,173 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null6.1480599883649, length=389
2016-12-01 19:15:19,173 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:19,188 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null6.1480599883649
2016-12-01 19:15:19,190 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null6.1480599883649 after 2ms
2016-12-01 19:15:19,267 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_SEQUENCES/045c57a37dbbdf8427895346f2ea2e0c/recovered.edits/0000000000000000007.temp region=045c57a37dbbdf8427895346f2ea2e0c
2016-12-01 19:15:19,267 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:19,332 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_SEQUENCES/045c57a37dbbdf8427895346f2ea2e0c/recovered.edits/0000000000000000007.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_SEQUENCES/045c57a37dbbdf8427895346f2ea2e0c/recovered.edits/0000000000000000007
2016-12-01 19:15:19,333 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null6.1480599883649, length=389, corrupted=false, progress failed=false
2016-12-01 19:15:19,350 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null6.1480599883649 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:19,350 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@732d7d6f in 192ms
2016-12-01 19:15:19,907 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null3.1480599883340
2016-12-01 19:15:19,933 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null3.1480599883340, length=386
2016-12-01 19:15:19,933 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:19,949 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null3.1480599883340
2016-12-01 19:15:19,950 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null3.1480599883340 after 1ms
2016-12-01 19:15:20,056 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PROCEDURE/2ca0f5757a70a75a2dfac9e2b8e8de14/recovered.edits/0000000000000000104.temp region=2ca0f5757a70a75a2dfac9e2b8e8de14
2016-12-01 19:15:20,057 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:20,128 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PROCEDURE/2ca0f5757a70a75a2dfac9e2b8e8de14/recovered.edits/0000000000000000104.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PROCEDURE/2ca0f5757a70a75a2dfac9e2b8e8de14/recovered.edits/0000000000000000104
2016-12-01 19:15:20,128 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null3.1480599883340, length=386, corrupted=false, progress failed=false
2016-12-01 19:15:20,142 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null3.1480599883340 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:20,142 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@69ad343f in 235ms
2016-12-01 19:15:20,584 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null7.1480599883593
2016-12-01 19:15:20,612 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null7.1480599883593, length=91
2016-12-01 19:15:20,612 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:20,628 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null7.1480599883593
2016-12-01 19:15:20,629 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null7.1480599883593 after 1ms
2016-12-01 19:15:20,665 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:20,665 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null7.1480599883593, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:20,682 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null7.1480599883593 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:20,682 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@21ac5954 in 98ms
2016-12-01 19:15:21,259 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null4.1480599883309
2016-12-01 19:15:21,277 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null4.1480599883309, length=91
2016-12-01 19:15:21,278 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:21,296 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null4.1480599883309
2016-12-01 19:15:21,297 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null4.1480599883309 after 1ms
2016-12-01 19:15:21,340 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:21,340 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null4.1480599883309, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:21,358 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null4.1480599883309 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:21,358 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@5a313f4 in 99ms
2016-12-01 19:15:22,046 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null10.1480599883854
2016-12-01 19:15:22,072 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null10.1480599883854, length=91
2016-12-01 19:15:22,072 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:22,091 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null10.1480599883854
2016-12-01 19:15:22,092 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null10.1480599883854 after 1ms
2016-12-01 19:15:22,140 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:22,140 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null10.1480599883854, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:22,158 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null10.1480599883854 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:22,158 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@49c5f039 in 112ms
2016-12-01 19:15:22,357 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:15:22,773 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null2.1480599883239
2016-12-01 19:15:22,802 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null2.1480599883239, length=422
2016-12-01 19:15:22,802 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:22,820 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null2.1480599883239
2016-12-01 19:15:22,821 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null2.1480599883239 after 1ms
2016-12-01 19:15:22,892 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/436b641f523cc9e1add60225998a4a4b/recovered.edits/0000000000000000171.temp region=436b641f523cc9e1add60225998a4a4b
2016-12-01 19:15:22,892 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:22,978 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/436b641f523cc9e1add60225998a4a4b/recovered.edits/0000000000000000171.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/436b641f523cc9e1add60225998a4a4b/recovered.edits/0000000000000000171
2016-12-01 19:15:22,978 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null2.1480599883239, length=422, corrupted=false, progress failed=false
2016-12-01 19:15:23,022 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null2.1480599883239 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:23,022 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@4f1c0be7 in 249ms
2016-12-01 19:15:23,315 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null2.1480599883174
2016-12-01 19:15:23,331 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null2.1480599883174, length=91
2016-12-01 19:15:23,331 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:23,348 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null2.1480599883174
2016-12-01 19:15:23,350 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null2.1480599883174 after 1ms
2016-12-01 19:15:23,391 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:23,391 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null2.1480599883174, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:23,407 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null2.1480599883174 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:23,407 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@4c665f6b in 92ms
2016-12-01 19:15:24,158 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null0.1480599882821
2016-12-01 19:15:24,179 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null0.1480599882821, length=91
2016-12-01 19:15:24,179 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:24,193 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null0.1480599882821
2016-12-01 19:15:24,194 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null0.1480599882821 after 0ms
2016-12-01 19:15:24,233 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:24,233 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null0.1480599882821, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:24,249 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null0.1480599882821 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:24,249 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@3f26fddc in 91ms
2016-12-01 19:15:24,965 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null5.1480599883532
2016-12-01 19:15:24,994 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null5.1480599883532, length=380
2016-12-01 19:15:24,994 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:25,012 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null5.1480599883532
2016-12-01 19:15:25,013 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null5.1480599883532 after 1ms
2016-12-01 19:15:25,176 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/04d5ffef435a1e4041af8895340de6ae/recovered.edits/0000000000000000007.temp region=04d5ffef435a1e4041af8895340de6ae
2016-12-01 19:15:25,176 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:25,257 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/04d5ffef435a1e4041af8895340de6ae/recovered.edits/0000000000000000007.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/04d5ffef435a1e4041af8895340de6ae/recovered.edits/0000000000000000007
2016-12-01 19:15:25,257 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null5.1480599883532, length=380, corrupted=false, progress failed=false
2016-12-01 19:15:25,259 INFO  [pool-10-thread-1] client.HBaseAdmin: Created SPLICE_INIT
2016-12-01 19:15:25,300 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null5.1480599883532 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:25,300 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@66363208 in 335ms
2016-12-01 19:15:25,867 INFO  [pool-10-thread-1] db.SpliceDatabase: Booting the Splice Machine database
2016-12-01 19:15:25,918 ERROR [pool-10-thread-1] lifecycle.DatabaseLifecycleManager: Error during during startup of service com.splicemachine.derby.lifecycle.MonitoredLifecycleService@4b6f52d6:
java.sql.SQLException: Failed to start database 'splicedb' with class loader sun.misc.Launcher$AppClassLoader@18b4aac2, see the next exception for details.
    at com.splicemachine.db.impl.jdbc.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:102)
    at com.splicemachine.db.impl.jdbc.Util.newEmbedSQLException(Util.java:170)
    at com.splicemachine.db.impl.jdbc.Util.seeNextException(Util.java:306)
    at com.splicemachine.db.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:2326)
    at com.splicemachine.db.impl.jdbc.EmbedConnection.<init>(EmbedConnection.java:302)
    at com.splicemachine.db.impl.jdbc.EmbedConnection30.<init>(EmbedConnection30.java:72)
    at com.splicemachine.db.impl.jdbc.EmbedConnection40.<init>(EmbedConnection40.java:57)
    at com.splicemachine.db.jdbc.Driver40.getNewEmbedConnection(Driver40.java:69)
    at com.splicemachine.db.jdbc.InternalDriver.connect(InternalDriver.java:256)
    at com.splicemachine.db.jdbc.EmbeddedDriver.connect(EmbeddedDriver.java:125)
    at com.splicemachine.tools.EmbedConnectionMaker.createNew(EmbedConnectionMaker.java:42)
    at com.splicemachine.derby.lifecycle.EngineLifecycleService.start(EngineLifecycleService.java:98)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:229)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: Failed to start database 'splicedb' with class loader sun.misc.Launcher$AppClassLoader@18b4aac2, see the next exception for details.
    at com.splicemachine.db.impl.jdbc.SQLExceptionFactory.getSQLException(SQLExceptionFactory.java:46)
    at com.splicemachine.db.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(SQLExceptionFactory40.java:126)
    at com.splicemachine.db.impl.jdbc.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:75)
    ... 16 more
Caused by: java.sql.SQLException: Java exception: ': java.lang.NullPointerException'.
    at com.splicemachine.db.impl.jdbc.SQLExceptionFactory.getSQLException(SQLExceptionFactory.java:46)
    at com.splicemachine.db.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(SQLExceptionFactory40.java:126)
    at com.splicemachine.db.impl.jdbc.SQLExceptionFactory40.getSQLException(SQLExceptionFactory40.java:75)
    at com.splicemachine.db.impl.jdbc.Util.newEmbedSQLException(Util.java:170)
    at com.splicemachine.db.impl.jdbc.Util.javaException(Util.java:327)
    at com.splicemachine.db.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:2322)
    ... 13 more
Caused by: java.lang.NullPointerException
    at com.splicemachine.pipeline.Exceptions.parseException(Exceptions.java:37)
    at com.splicemachine.derby.impl.sql.ZkPropertyManager.getProperty(ZkPropertyManager.java:65)
    at com.splicemachine.derby.impl.store.access.PropertyConglomerate.<init>(PropertyConglomerate.java:64)
    at com.splicemachine.derby.impl.store.access.SpliceAccessManager.boot(SpliceAccessManager.java:687)
    at com.splicemachine.db.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1996)
    at com.splicemachine.db.impl.services.monitor.TopService.bootModule(TopService.java:337)
    at com.splicemachine.db.impl.services.monitor.BaseMonitor.startModule(BaseMonitor.java:545)
    at com.splicemachine.db.impl.services.monitor.FileMonitor.startModule(FileMonitor.java:51)
    at com.splicemachine.db.iapi.services.monitor.Monitor.bootServiceModule(Monitor.java:430)
    at com.splicemachine.derby.impl.db.SpliceDatabase.bootStore(SpliceDatabase.java:447)
    at com.splicemachine.db.impl.db.BasicDatabase.boot(BasicDatabase.java:164)
    at com.splicemachine.derby.impl.db.SpliceDatabase.boot(SpliceDatabase.java:115)
    at com.splicemachine.db.impl.services.monitor.BaseMonitor.boot(BaseMonitor.java:1996)
    at com.splicemachine.db.impl.services.monitor.TopService.bootModule(TopService.java:337)
    at com.splicemachine.db.impl.services.monitor.BaseMonitor.bootService(BaseMonitor.java:1830)
    at com.splicemachine.db.impl.services.monitor.BaseMonitor.startProviderService(BaseMonitor.java:1696)
    at com.splicemachine.db.impl.services.monitor.BaseMonitor.findProviderAndStartService(BaseMonitor.java:1574)
    at com.splicemachine.db.impl.services.monitor.BaseMonitor.startPersistentService(BaseMonitor.java:993)
    at com.splicemachine.db.iapi.services.monitor.Monitor.startPersistentService(Monitor.java:553)
    at com.splicemachine.db.impl.jdbc.EmbedConnection.bootDatabase(EmbedConnection.java:2287)
    ... 13 more
2016-12-01 19:15:25,920 ERROR [pool-10-thread-1] lifecycle.DatabaseLifecycleManager: Error during shutdown of service com.splicemachine.derby.lifecycle.NetworkLifecycleService@3577a35a:
java.lang.NullPointerException
    at com.splicemachine.derby.lifecycle.NetworkLifecycleService.shutdown(NetworkLifecycleService.java:65)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Shutdown.run(DatabaseLifecycleManager.java:268)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.bootServices(DatabaseLifecycleManager.java:233)
    at com.splicemachine.lifecycle.DatabaseLifecycleManager$Startup.run(DatabaseLifecycleManager.java:220)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
2016-12-01 19:15:25,920 INFO  [pool-10-thread-1] impl.TimestampClient: shutting down TimestampClient state=SHUTDOWN
2016-12-01 19:15:25,973 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null10.1480599883899
2016-12-01 19:15:26,002 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null10.1480599883899, length=91
2016-12-01 19:15:26,002 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:26,020 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null10.1480599883899
2016-12-01 19:15:26,022 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null10.1480599883899 after 2ms
2016-12-01 19:15:26,066 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:26,066 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null10.1480599883899, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:26,082 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null10.1480599883899 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:26,082 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@5ab66858 in 109ms
2016-12-01 19:15:26,891 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null14.1480599884236
2016-12-01 19:15:26,918 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null14.1480599884236, length=91
2016-12-01 19:15:26,918 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:26,936 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null14.1480599884236
2016-12-01 19:15:26,937 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null14.1480599884236 after 1ms
2016-12-01 19:15:26,982 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:26,982 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null14.1480599884236, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:26,999 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null14.1480599884236 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:26,999 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@182e96a0 in 108ms
2016-12-01 19:15:27,484 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null12.1480599884212
2016-12-01 19:15:27,512 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null12.1480599884212, length=91
2016-12-01 19:15:27,512 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:27,530 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null12.1480599884212
2016-12-01 19:15:27,531 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null12.1480599884212 after 1ms
2016-12-01 19:15:27,572 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:27,572 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null12.1480599884212, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:27,589 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null12.1480599884212 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:27,589 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@193bb7ce in 104ms
2016-12-01 19:15:28,288 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null12.1480599884090
2016-12-01 19:15:28,314 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null12.1480599884090, length=91
2016-12-01 19:15:28,314 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:28,330 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null12.1480599884090
2016-12-01 19:15:28,332 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null12.1480599884090 after 1ms
2016-12-01 19:15:28,372 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:28,372 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null12.1480599884090, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:28,389 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null12.1480599884090 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:28,390 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@390efe9b in 101ms
2016-12-01 19:15:28,935 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null8.1480599883718
2016-12-01 19:15:28,962 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null8.1480599883718, length=91
2016-12-01 19:15:28,962 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:28,978 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null8.1480599883718
2016-12-01 19:15:28,980 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null8.1480599883718 after 2ms
2016-12-01 19:15:29,028 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:29,028 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null8.1480599883718, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:29,045 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null8.1480599883718 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:29,045 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@666a5ff5 in 110ms
2016-12-01 19:15:29,767 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null11.1480599883952
2016-12-01 19:15:29,782 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null11.1480599883952, length=91
2016-12-01 19:15:29,782 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:29,800 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null11.1480599883952
2016-12-01 19:15:29,801 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null11.1480599883952 after 1ms
2016-12-01 19:15:29,843 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:29,844 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null11.1480599883952, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:29,862 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null11.1480599883952 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:29,862 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@1dd2e3cb in 95ms
2016-12-01 19:15:30,694 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null9.1480599883803
2016-12-01 19:15:30,723 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null9.1480599883803, length=91
2016-12-01 19:15:30,723 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:30,740 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null9.1480599883803
2016-12-01 19:15:30,741 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null9.1480599883803 after 1ms
2016-12-01 19:15:30,779 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:30,780 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null9.1480599883803, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:30,796 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null9.1480599883803 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:30,796 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@4990c9a0 in 101ms
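The block above is one complete split-log task cycle: the SplitLogWorker acquires a task znode under `/hbase-secure/splitWAL`, the WALSplitter recovers the HDFS lease and replays the WAL's edits, and the coordination layer transitions the znode to its final state DONE. A minimal simulation of that lifecycle (illustrative only — all class and method names here are hypothetical, not HBase's actual implementation):

```python
# Illustrative simulation of the split-log task lifecycle seen in the log:
# TASK_UNASSIGNED -> OWNED (acquired by a worker) -> DONE. Not HBase code.

class SplitTask:
    def __init__(self, znode):
        self.znode = znode
        self.state = "TASK_UNASSIGNED"
        self.edits = 0

    def acquire(self, worker):
        # A worker races to mark itself owner of the task znode
        # ("worker ... acquired task ..." in the log).
        assert self.state == "TASK_UNASSIGNED"
        self.state = "OWNED"
        self.owner = worker

    def split(self, wal_edits):
        # Replay each edit into per-region recovered.edits files.
        assert self.state == "OWNED"
        self.edits = len(wal_edits)

    def finish(self):
        # "successfully transitioned task ... to final state DONE".
        assert self.state == "OWNED"
        self.state = "DONE"
        return f"Processed {self.edits} edits"

task = SplitTask("/hbase-secure/splitWAL/...")
task.acquire("hscale-dev1-dn1,16020,1480599802236")
task.split([])          # an empty 91-byte WAL yields 0 edits
print(task.finish())    # -> Processed 0 edits
```

The empty-WAL case above ("Processed 0 edits across 0 regions") is the common one in this log: most of the dead servers' WALs contained no unflushed edits.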
2016-12-01 19:15:31,372 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null1.1480599883037
2016-12-01 19:15:31,394 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null1.1480599883037, length=415
2016-12-01 19:15:31,394 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:31,411 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null1.1480599883037
2016-12-01 19:15:31,412 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null1.1480599883037 after 1ms
2016-12-01 19:15:31,489 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/bb61d57cfdba9c2670d6050fc59581c6/recovered.edits/0000000000000000136.temp region=bb61d57cfdba9c2670d6050fc59581c6
2016-12-01 19:15:31,489 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:31,569 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/bb61d57cfdba9c2670d6050fc59581c6/recovered.edits/0000000000000000136.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/bb61d57cfdba9c2670d6050fc59581c6/recovered.edits/0000000000000000136
2016-12-01 19:15:31,569 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null1.1480599883037, length=415, corrupted=false, progress failed=false
2016-12-01 19:15:31,586 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null1.1480599883037 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:31,586 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@5e7acffd in 214ms
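This cycle did carry an edit: the writer creates a recovered-edits file named after the starting sequence id, zero-padded to 19 digits, with a `.temp` suffix that is renamed away once the stream is closed (the `0000000000000000136.temp` → `0000000000000000136` rename above). A hedged sketch of that naming scheme, inferred from the log rather than quoted from HBase's source:

```python
# Sketch of the recovered.edits file naming visible in the log: the starting
# sequence id zero-padded to 19 digits, written as ".temp" and renamed into
# place on close. Inferred from the log output, not HBase's implementation.

def recovered_edits_paths(region_dir, start_seq_id):
    """Return (temp_path, final_path) for a recovered-edits writer."""
    name = "%019d" % start_seq_id          # e.g. 136 -> 0000000000000000136
    final = f"{region_dir}/recovered.edits/{name}"
    return final + ".temp", final

temp, final = recovered_edits_paths(
    "/apps/hbase/data/data/default/PATIENT/bb61d57cfdba9c2670d6050fc59581c6",
    136,
)
print(temp.rsplit("/", 1)[-1])   # 0000000000000000136.temp
print(final.rsplit("/", 1)[-1])  # 0000000000000000136
```

The `.temp`-then-rename pattern means a crashed splitter leaves only a dangling temp file behind; the region open path only replays the renamed, completed files.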
2016-12-01 19:15:32,256 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null2.1480599883118
2016-12-01 19:15:32,283 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null2.1480599883118, length=380
2016-12-01 19:15:32,283 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:32,302 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null2.1480599883118
2016-12-01 19:15:32,303 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null2.1480599883118 after 1ms
2016-12-01 19:15:32,376 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/TENTATIVE_DDL/64d62e6d820b30bc90af9615c4188533/recovered.edits/0000000000000000009.temp region=64d62e6d820b30bc90af9615c4188533
2016-12-01 19:15:32,376 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:32,448 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/TENTATIVE_DDL/64d62e6d820b30bc90af9615c4188533/recovered.edits/0000000000000000009.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/TENTATIVE_DDL/64d62e6d820b30bc90af9615c4188533/recovered.edits/0000000000000000009
2016-12-01 19:15:32,448 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null2.1480599883118, length=380, corrupted=false, progress failed=false
2016-12-01 19:15:32,466 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null2.1480599883118 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:32,466 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@57e4580d in 209ms
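The task znode names in these cycles are the WAL paths percent-encoded for ZooKeeper: `/` becomes `%2F`, and the `%2C` (an encoded comma) already embedded in the WAL file name is encoded a second time, yielding the `%252C` runs seen above. The double encoding can be reproduced with standard percent-encoding:

```python
from urllib.parse import quote

# The WAL file name on HDFS already carries %2C (an encoded comma). Encoding
# the full relative path for the ZooKeeper task node escapes '/' to %2F and
# '%' to %25, which produces the %252C seen in the split task znode names.
wal_rel_path = ("WALs/hscale-dev1-dn2,16020,1480599823976-splitting/"
                "hscale-dev1-dn2%2C16020%2C1480599823976.null2.1480599883118")
znode_name = quote(wal_rel_path, safe="")
print(znode_name)
# -> WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2F
#    hscale-dev1-dn2%252C16020%252C1480599823976.null2.1480599883118
#    (one line; wrapped here for readability)
```

Decoding a task name once therefore recovers the splitting-directory path, and the file-name portion still needs a second decode to get back the original server name with commas.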
2016-12-01 19:15:33,120 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null11.1480599883994
2016-12-01 19:15:33,135 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null11.1480599883994, length=91
2016-12-01 19:15:33,135 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:33,150 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null11.1480599883994
2016-12-01 19:15:33,152 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null11.1480599883994 after 2ms
2016-12-01 19:15:33,200 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:33,200 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null11.1480599883994, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:33,214 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null11.1480599883994 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:33,214 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@41ccf267 in 93ms
2016-12-01 19:15:33,729 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null7.1480599883635
2016-12-01 19:15:33,753 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null7.1480599883635, length=91
2016-12-01 19:15:33,753 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:33,770 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null7.1480599883635
2016-12-01 19:15:33,771 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null7.1480599883635 after 1ms
2016-12-01 19:15:33,822 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:33,822 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null7.1480599883635, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:33,839 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null7.1480599883635 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:33,839 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@7677809f in 110ms
2016-12-01 19:15:34,701 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null14.1480599884282
2016-12-01 19:15:34,716 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null14.1480599884282, length=91
2016-12-01 19:15:34,716 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:34,735 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null14.1480599884282
2016-12-01 19:15:34,737 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null14.1480599884282 after 2ms
2016-12-01 19:15:34,785 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:34,785 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null14.1480599884282, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:34,801 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null14.1480599884282 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:34,801 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@e21f044 in 100ms
2016-12-01 19:15:35,544 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null11.1480599884115
2016-12-01 19:15:35,564 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null11.1480599884115, length=91
2016-12-01 19:15:35,565 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:35,582 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null11.1480599884115
2016-12-01 19:15:35,584 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null11.1480599884115 after 2ms
2016-12-01 19:15:35,632 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:35,632 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null11.1480599884115, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:35,648 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null11.1480599884115 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:35,648 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@1f25fc0a in 104ms
2016-12-01 19:15:36,285 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null15.1480599884517
2016-12-01 19:15:36,300 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null15.1480599884517, length=91
2016-12-01 19:15:36,300 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:36,316 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null15.1480599884517
2016-12-01 19:15:36,317 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null15.1480599884517 after 1ms
2016-12-01 19:15:36,355 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:36,355 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null15.1480599884517, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:36,373 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null15.1480599884517 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:36,373 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@7653c4f7 in 88ms
2016-12-01 19:15:37,164 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null14.1480599884412
2016-12-01 19:15:37,192 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null14.1480599884412, length=91
2016-12-01 19:15:37,192 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:37,209 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null14.1480599884412
2016-12-01 19:15:37,216 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null14.1480599884412 after 7ms
2016-12-01 19:15:37,264 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:37,264 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null14.1480599884412, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:37,281 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null14.1480599884412 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:37,281 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@dc29171 in 117ms
2016-12-01 19:15:37,752 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null15.1480599884331
2016-12-01 19:15:37,773 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null15.1480599884331, length=91
2016-12-01 19:15:37,773 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:37,790 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null15.1480599884331
2016-12-01 19:15:37,791 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null15.1480599884331 after 1ms
2016-12-01 19:15:37,837 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:37,837 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null15.1480599884331, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:37,853 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null15.1480599884331 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:37,853 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@5b9c1d01 in 101ms
2016-12-01 19:15:38,409 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null3.1480599883216
2016-12-01 19:15:38,434 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null3.1480599883216, length=380
2016-12-01 19:15:38,434 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:38,449 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null3.1480599883216
2016-12-01 19:15:38,450 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null3.1480599883216 after 1ms
2016-12-01 19:15:38,518 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0-Writer-2] wal.WALSplitter: Creating writer path=hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/aa3bf9854a2e6a06cae52b1cfa2d6754/recovered.edits/0000000000000000007.temp region=aa3bf9854a2e6a06cae52b1cfa2d6754
2016-12-01 19:15:38,519 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:38,592 INFO  [split-log-closeStream-1] wal.WALSplitter: Rename hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/aa3bf9854a2e6a06cae52b1cfa2d6754/recovered.edits/0000000000000000007.temp to hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/aa3bf9854a2e6a06cae52b1cfa2d6754/recovered.edits/0000000000000000007
2016-12-01 19:15:38,592 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 1 edits across 1 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null3.1480599883216, length=380, corrupted=false, progress failed=false
2016-12-01 19:15:38,612 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null3.1480599883216 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:38,612 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@4d4afbd8 in 203ms
2016-12-01 19:15:39,412 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null9.1480599883764
2016-12-01 19:15:39,437 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null9.1480599883764, length=91
2016-12-01 19:15:39,437 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:39,456 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null9.1480599883764
2016-12-01 19:15:39,457 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null9.1480599883764 after 1ms
2016-12-01 19:15:39,505 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:39,505 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null9.1480599883764, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:39,524 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null9.1480599883764 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:39,524 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@725d6b85 in 112ms
2016-12-01 19:15:40,320 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null8.1480599883680
2016-12-01 19:15:40,335 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null8.1480599883680, length=91
2016-12-01 19:15:40,335 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:40,351 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null8.1480599883680
2016-12-01 19:15:40,352 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null8.1480599883680 after 1ms
2016-12-01 19:15:40,395 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:40,395 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn2,16020,1480599823976-splitting/hscale-dev1-dn2%2C16020%2C1480599823976.null8.1480599883680, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:40,412 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn2%2C16020%2C1480599823976-splitting%2Fhscale-dev1-dn2%252C16020%252C1480599823976.null8.1480599883680 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:40,412 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@7e182d3f in 92ms
2016-12-01 19:15:40,509 INFO  [PriorityRpcServer.handler=17,queue=1,port=16020] regionserver.RSRpcServices: Open PATIENT,,1479977629367.bb61d57cfdba9c2670d6050fc59581c6.
2016-12-01 19:15:40,514 INFO  [PriorityRpcServer.handler=17,queue=1,port=16020] regionserver.RSRpcServices: Open splice:TENTATIVE_DDL,,1480593681064.64d62e6d820b30bc90af9615c4188533.
2016-12-01 19:15:40,534 INFO  [PriorityRpcServer.handler=17,queue=1,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x02,1480593676447.aa3bf9854a2e6a06cae52b1cfa2d6754.
2016-12-01 19:15:40,534 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:40,534 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:40,550 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:40,550 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:40,550 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:40,550 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:40,550 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:40,551 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:40,551 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:40,555 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:40,556 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:40,564 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of PATIENT successfully.
2016-12-01 19:15:40,564 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of PATIENT successfully.
2016-12-01 19:15:40,564 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of PATIENT successfully.
2016-12-01 19:15:40,564 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of PATIENT successfully.
2016-12-01 19:15:40,564 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of PATIENT successfully.
2016-12-01 19:15:40,569 INFO  [StoreOpener-bb61d57cfdba9c2670d6050fc59581c6-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=34, currentSize=1577520, freeSize=1286912720, maxSize=1288490240, heapSize=1577520, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:40,570 INFO  [StoreOpener-bb61d57cfdba9c2670d6050fc59581c6-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:40,571 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:40,571 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:40,571 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:40,571 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:40,571 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:40,571 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:40,571 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:40,573 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:40,573 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:40,574 INFO  [StoreOpener-64d62e6d820b30bc90af9615c4188533-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=34, currentSize=1577520, freeSize=1286912720, maxSize=1288490240, heapSize=1577520, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:40,574 INFO  [StoreOpener-64d62e6d820b30bc90af9615c4188533-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:40,580 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/TENTATIVE_DDL/64d62e6d820b30bc90af9615c4188533/recovered.edits/0000000000000000009
2016-12-01 19:15:40,592 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:40,592 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:40,592 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:40,592 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:40,592 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:40,592 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:40,592 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:40,595 INFO  [StoreOpener-aa3bf9854a2e6a06cae52b1cfa2d6754-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=34, currentSize=1577520, freeSize=1286912720, maxSize=1288490240, heapSize=1577520, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:40,595 INFO  [StoreOpener-aa3bf9854a2e6a06cae52b1cfa2d6754-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:40,597 INFO  [StoreOpener-aa3bf9854a2e6a06cae52b1cfa2d6754-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=34, currentSize=1577520, freeSize=1286912720, maxSize=1288490240, heapSize=1577520, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:40,597 INFO  [StoreOpener-aa3bf9854a2e6a06cae52b1cfa2d6754-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:40,597 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PATIENT/bb61d57cfdba9c2670d6050fc59581c6/recovered.edits/0000000000000000136
2016-12-01 19:15:40,604 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/aa3bf9854a2e6a06cae52b1cfa2d6754/recovered.edits/0000000000000000007
2016-12-01 19:15:40,684 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined 64d62e6d820b30bc90af9615c4188533; next sequenceid=10
2016-12-01 19:15:40,690 INFO  [PostOpenDeployTasks:64d62e6d820b30bc90af9615c4188533] regionserver.HRegionServer: Post open deploy tasks for splice:TENTATIVE_DDL,,1480593681064.64d62e6d820b30bc90af9615c4188533.
2016-12-01 19:15:40,697 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined bb61d57cfdba9c2670d6050fc59581c6; next sequenceid=137
2016-12-01 19:15:40,697 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:40,697 INFO  [PostOpenDeployTasks:64d62e6d820b30bc90af9615c4188533] hbase.MetaTableAccessor: Updated row splice:TENTATIVE_DDL,,1480593681064.64d62e6d820b30bc90af9615c4188533. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:40,712 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined aa3bf9854a2e6a06cae52b1cfa2d6754; next sequenceid=8
2016-12-01 19:15:40,716 INFO  [PostOpenDeployTasks:bb61d57cfdba9c2670d6050fc59581c6] regionserver.HRegionServer: Post open deploy tasks for PATIENT,,1479977629367.bb61d57cfdba9c2670d6050fc59581c6.
2016-12-01 19:15:40,718 INFO  [PostOpenDeployTasks:aa3bf9854a2e6a06cae52b1cfa2d6754] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x02,1480593676447.aa3bf9854a2e6a06cae52b1cfa2d6754.
2016-12-01 19:15:40,723 INFO  [PostOpenDeployTasks:bb61d57cfdba9c2670d6050fc59581c6] hbase.MetaTableAccessor: Updated row PATIENT,,1479977629367.bb61d57cfdba9c2670d6050fc59581c6. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:40,725 INFO  [PostOpenDeployTasks:aa3bf9854a2e6a06cae52b1cfa2d6754] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x02,1480593676447.aa3bf9854a2e6a06cae52b1cfa2d6754. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:40,892 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null1.1480599883150
2016-12-01 19:15:40,910 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null1.1480599883150, length=91
2016-12-01 19:15:40,910 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:40,930 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null1.1480599883150
2016-12-01 19:15:40,932 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null1.1480599883150 after 2ms
2016-12-01 19:15:40,982 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Split writers finished
2016-12-01 19:15:40,982 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn3,16020,1480599826952-splitting/hscale-dev1-dn3%2C16020%2C1480599826952.null1.1480599883150, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:41,002 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn3%2C16020%2C1480599826952-splitting%2Fhscale-dev1-dn3%252C16020%252C1480599826952.null1.1480599883150 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:41,002 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-1] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@6540feef in 110ms
2016-12-01 19:15:41,085 INFO  [PriorityRpcServer.handler=3,queue=1,port=16020] regionserver.RSRpcServices: Open ENCOUNTER,3,1479977632429.436b641f523cc9e1add60225998a4a4b.
2016-12-01 19:15:41,093 INFO  [PriorityRpcServer.handler=3,queue=1,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_SEQUENCES,,1480593685685.045c57a37dbbdf8427895346f2ea2e0c.
2016-12-01 19:15:41,112 INFO  [PriorityRpcServer.handler=3,queue=1,port=16020] regionserver.RSRpcServices: Open PROCEDURE,3,1479977635472.2ca0f5757a70a75a2dfac9e2b8e8de14.
2016-12-01 19:15:41,113 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:41,113 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:41,130 INFO  [PriorityRpcServer.handler=3,queue=1,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x06,1480593676447.720f27c20e300e2c5bc7b5d3b8eddcbf.
2016-12-01 19:15:41,131 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:41,131 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:41,132 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:41,132 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:41,132 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:41,132 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:41,132 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:41,132 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:41,132 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:41,138 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of ENCOUNTER successfully.
2016-12-01 19:15:41,138 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of ENCOUNTER successfully.
2016-12-01 19:15:41,138 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of ENCOUNTER successfully.
2016-12-01 19:15:41,138 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of ENCOUNTER successfully.
2016-12-01 19:15:41,138 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of ENCOUNTER successfully.
2016-12-01 19:15:41,142 INFO  [PriorityRpcServer.handler=3,queue=1,port=16020] regionserver.RSRpcServices: Open splice:SPLICE_TXN,\x0C,1480593676447.04d5ffef435a1e4041af8895340de6ae.
2016-12-01 19:15:41,143 INFO  [StoreOpener-436b641f523cc9e1add60225998a4a4b-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=34, currentSize=1577520, freeSize=1286912720, maxSize=1288490240, heapSize=1577520, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:41,143 INFO  [StoreOpener-436b641f523cc9e1add60225998a4a4b-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:41,143 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:41,144 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:41,144 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:41,144 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:41,144 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:41,144 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:41,144 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:41,146 INFO  [StoreOpener-045c57a37dbbdf8427895346f2ea2e0c-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=34, currentSize=1577520, freeSize=1286912720, maxSize=1288490240, heapSize=1577520, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:41,146 INFO  [StoreOpener-045c57a37dbbdf8427895346f2ea2e0c-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:41,152 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_SEQUENCES/045c57a37dbbdf8427895346f2ea2e0c/recovered.edits/0000000000000000007
2016-12-01 19:15:41,159 INFO  [PriorityRpcServer.handler=3,queue=1,port=16020] regionserver.RSRpcServices: Open DD_ENTITY_DEF,,1479977408920.fa9ccd67af9c529bf1fab5a2893825af.
2016-12-01 19:15:41,159 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:41,159 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:41,173 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/ENCOUNTER/436b641f523cc9e1add60225998a4a4b/recovered.edits/0000000000000000171
2016-12-01 19:15:41,187 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:41,187 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:41,187 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:41,187 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:41,187 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:41,187 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:41,187 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:41,190 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of PROCEDURE successfully.
2016-12-01 19:15:41,191 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of PROCEDURE successfully.
2016-12-01 19:15:41,191 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of PROCEDURE successfully.
2016-12-01 19:15:41,191 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of PROCEDURE successfully.
2016-12-01 19:15:41,191 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of PROCEDURE successfully.
2016-12-01 19:15:41,196 INFO  [StoreOpener-2ca0f5757a70a75a2dfac9e2b8e8de14-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=34, currentSize=1577520, freeSize=1286912720, maxSize=1288490240, heapSize=1577520, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:41,196 INFO  [StoreOpener-2ca0f5757a70a75a2dfac9e2b8e8de14-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:41,212 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/PROCEDURE/2ca0f5757a70a75a2dfac9e2b8e8de14/recovered.edits/0000000000000000104
2016-12-01 19:15:41,237 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined 045c57a37dbbdf8427895346f2ea2e0c; next sequenceid=8
2016-12-01 19:15:41,241 INFO  [PostOpenDeployTasks:045c57a37dbbdf8427895346f2ea2e0c] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_SEQUENCES,,1480593685685.045c57a37dbbdf8427895346f2ea2e0c.
2016-12-01 19:15:41,245 INFO  [PostOpenDeployTasks:045c57a37dbbdf8427895346f2ea2e0c] hbase.MetaTableAccessor: Updated row splice:SPLICE_SEQUENCES,,1480593685685.045c57a37dbbdf8427895346f2ea2e0c. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:41,275 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined 436b641f523cc9e1add60225998a4a4b; next sequenceid=172
2016-12-01 19:15:41,275 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:41,299 INFO  [PostOpenDeployTasks:436b641f523cc9e1add60225998a4a4b] regionserver.HRegionServer: Post open deploy tasks for ENCOUNTER,3,1479977632429.436b641f523cc9e1add60225998a4a4b.
2016-12-01 19:15:41,299 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:41,300 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:41,300 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined 2ca0f5757a70a75a2dfac9e2b8e8de14; next sequenceid=105
2016-12-01 19:15:41,301 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:41,304 INFO  [PostOpenDeployTasks:436b641f523cc9e1add60225998a4a4b] hbase.MetaTableAccessor: Updated row ENCOUNTER,3,1479977632429.436b641f523cc9e1add60225998a4a4b. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:41,316 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:41,317 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:41,317 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:41,317 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:41,317 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:41,317 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:41,317 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:41,320 INFO  [StoreOpener-720f27c20e300e2c5bc7b5d3b8eddcbf-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=34, currentSize=1577520, freeSize=1286912720, maxSize=1288490240, heapSize=1577520, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:41,320 INFO  [StoreOpener-720f27c20e300e2c5bc7b5d3b8eddcbf-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:41,322 INFO  [PostOpenDeployTasks:2ca0f5757a70a75a2dfac9e2b8e8de14] regionserver.HRegionServer: Post open deploy tasks for PROCEDURE,3,1479977635472.2ca0f5757a70a75a2dfac9e2b8e8de14.
2016-12-01 19:15:41,323 INFO  [StoreOpener-720f27c20e300e2c5bc7b5d3b8eddcbf-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=34, currentSize=1577520, freeSize=1286912720, maxSize=1288490240, heapSize=1577520, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:41,323 INFO  [StoreOpener-720f27c20e300e2c5bc7b5d3b8eddcbf-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:41,327 INFO  [PostOpenDeployTasks:2ca0f5757a70a75a2dfac9e2b8e8de14] hbase.MetaTableAccessor: Updated row PROCEDURE,3,1479977635472.2ca0f5757a70a75a2dfac9e2b8e8de14. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:41,334 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/720f27c20e300e2c5bc7b5d3b8eddcbf/recovered.edits/0000000000000000009
2016-12-01 19:15:41,381 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:41,381 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:41,400 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:15:41,400 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:15:41,401 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:41,401 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:41,401 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:41,402 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:41,402 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:41,402 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:41,402 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:41,405 INFO  [StoreOpener-04d5ffef435a1e4041af8895340de6ae-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=34, currentSize=1577520, freeSize=1286912720, maxSize=1288490240, heapSize=1577520, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:41,405 INFO  [StoreOpener-04d5ffef435a1e4041af8895340de6ae-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:41,408 INFO  [StoreOpener-04d5ffef435a1e4041af8895340de6ae-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=34, currentSize=1577520, freeSize=1286912720, maxSize=1288490240, heapSize=1577520, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:41,408 INFO  [StoreOpener-04d5ffef435a1e4041af8895340de6ae-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:41,411 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined 720f27c20e300e2c5bc7b5d3b8eddcbf; next sequenceid=10
2016-12-01 19:15:41,415 INFO  [PostOpenDeployTasks:720f27c20e300e2c5bc7b5d3b8eddcbf] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x06,1480593676447.720f27c20e300e2c5bc7b5d3b8eddcbf.
2016-12-01 19:15:41,416 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/04d5ffef435a1e4041af8895340de6ae/recovered.edits/0000000000000000007
2016-12-01 19:15:41,419 INFO  [PostOpenDeployTasks:720f27c20e300e2c5bc7b5d3b8eddcbf] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x06,1480593676447.720f27c20e300e2c5bc7b5d3b8eddcbf. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:41,427 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:15:41,427 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:15:41,427 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:15:41,427 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:15:41,427 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:15:41,427 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:15:41,427 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:15:41,430 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.hbase.index.Indexer from HTD of DD_ENTITY_DEF successfully.
2016-12-01 19:15:41,430 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of DD_ENTITY_DEF successfully.
2016-12-01 19:15:41,430 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of DD_ENTITY_DEF successfully.
2016-12-01 19:15:41,431 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of DD_ENTITY_DEF successfully.
2016-12-01 19:15:41,431 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of DD_ENTITY_DEF successfully.
2016-12-01 19:15:41,436 INFO  [StoreOpener-fa9ccd67af9c529bf1fab5a2893825af-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=34, currentSize=1577520, freeSize=1286912720, maxSize=1288490240, heapSize=1577520, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:15:41,436 INFO  [StoreOpener-fa9ccd67af9c529bf1fab5a2893825af-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:15:41,453 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Replaying edits from hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/default/DD_ENTITY_DEF/fa9ccd67af9c529bf1fab5a2893825af/recovered.edits/0000000000000000434
2016-12-01 19:15:41,497 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Onlined 04d5ffef435a1e4041af8895340de6ae; next sequenceid=8
2016-12-01 19:15:41,504 INFO  [PostOpenDeployTasks:04d5ffef435a1e4041af8895340de6ae] regionserver.HRegionServer: Post open deploy tasks for splice:SPLICE_TXN,\x0C,1480593676447.04d5ffef435a1e4041af8895340de6ae.
2016-12-01 19:15:41,511 INFO  [PostOpenDeployTasks:04d5ffef435a1e4041af8895340de6ae] hbase.MetaTableAccessor: Updated row splice:SPLICE_TXN,\x0C,1480593676447.04d5ffef435a1e4041af8895340de6ae. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:41,535 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Onlined fa9ccd67af9c529bf1fab5a2893825af; next sequenceid=435
2016-12-01 19:15:41,535 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-1] index.Indexer: Found some outstanding index updates that didn't succeed during WAL replay - attempting to replay now.
2016-12-01 19:15:41,555 INFO  [PostOpenDeployTasks:fa9ccd67af9c529bf1fab5a2893825af] regionserver.HRegionServer: Post open deploy tasks for DD_ENTITY_DEF,,1479977408920.fa9ccd67af9c529bf1fab5a2893825af.
2016-12-01 19:15:41,562 INFO  [PostOpenDeployTasks:fa9ccd67af9c529bf1fab5a2893825af] hbase.MetaTableAccessor: Updated row DD_ENTITY_DEF,,1479977408920.fa9ccd67af9c529bf1fab5a2893825af. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:41,720 INFO  [SplitLogWorker-hscale-dev1-dn1:16020] coordination.ZkSplitLogWorkerCoordination: worker hscale-dev1-dn1,16020,1480599802236 acquired task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null3.1480599883265
2016-12-01 19:15:41,743 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Splitting wal: hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null3.1480599883265, length=91
2016-12-01 19:15:41,743 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: DistributedLogReplay = false
2016-12-01 19:15:41,760 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: Recovering lease on dfs file hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null3.1480599883265
2016-12-01 19:15:41,761 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] util.FSHDFSUtils: recoverLease=true, attempt=0 on file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null3.1480599883265 after 1ms
2016-12-01 19:15:41,810 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Split writers finished
2016-12-01 19:15:41,810 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] wal.WALSplitter: Processed 0 edits across 0 regions; edits skipped=0; log file=hdfs://hscale-dev1-nn:8020/apps/hbase/data/WALs/hscale-dev1-dn4,16020,1480599845544-splitting/hscale-dev1-dn4%2C16020%2C1480599845544.null3.1480599883265, length=91, corrupted=false, progress failed=false
2016-12-01 19:15:41,829 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] coordination.ZkSplitLogWorkerCoordination: successfully transitioned task /hbase-secure/splitWAL/WALs%2Fhscale-dev1-dn4%2C16020%2C1480599845544-splitting%2Fhscale-dev1-dn4%252C16020%252C1480599845544.null3.1480599883265 to final state DONE hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:15:41,830 INFO  [RS_LOG_REPLAY_OPS-hscale-dev1-dn1:16020-0] handler.WALSplitterHandler: worker hscale-dev1-dn1,16020,1480599802236 done with task org.apache.hadoop.hbase.coordination.ZkSplitLogWorkerCoordination$ZkSplitTaskDetails@58d2d952 in 110ms
2016-12-01 19:15:42,167 INFO  [ReplicationExecutor-0] replication.ReplicationQueuesZKImpl: Atomically moving hscale-dev1-dn4,16020,1480599845544's wals to my queue
2016-12-01 19:16:04,418 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=240, waitTime=1
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=240, waitTime=1
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=240, waitTime=1
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:16:04,623 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:04,928 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:05,435 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:06,444 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:06,446 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:16:04 IST 2016, RpcRetryingCaller{globalStartTime=1480599964417, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:04 IST 2016, RpcRetryingCaller{globalStartTime=1480599964417, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:04 IST 2016, RpcRetryingCaller{globalStartTime=1480599964417, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:05 IST 2016, RpcRetryingCaller{globalStartTime=1480599964417, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:06 IST 2016, RpcRetryingCaller{globalStartTime=1480599964417, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:16:09,419 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:09,624 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:09,929 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:10,432 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:11,443 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:11,447 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:16:09 IST 2016, RpcRetryingCaller{globalStartTime=1480599969418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:09 IST 2016, RpcRetryingCaller{globalStartTime=1480599969418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:09 IST 2016, RpcRetryingCaller{globalStartTime=1480599969418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:10 IST 2016, RpcRetryingCaller{globalStartTime=1480599969418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:11 IST 2016, RpcRetryingCaller{globalStartTime=1480599969418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:16:14,419 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:14,625 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:14,929 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:15,436 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:16,444 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:16,446 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:16:14 IST 2016, RpcRetryingCaller{globalStartTime=1480599974418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:14 IST 2016, RpcRetryingCaller{globalStartTime=1480599974418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:14 IST 2016, RpcRetryingCaller{globalStartTime=1480599974418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:15 IST 2016, RpcRetryingCaller{globalStartTime=1480599974418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:16 IST 2016, RpcRetryingCaller{globalStartTime=1480599974418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:16:19,419 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:19,625 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:19,929 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:20,437 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:21,450 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:21,453 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:16:19 IST 2016, RpcRetryingCaller{globalStartTime=1480599979418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:19 IST 2016, RpcRetryingCaller{globalStartTime=1480599979418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:19 IST 2016, RpcRetryingCaller{globalStartTime=1480599979418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:20 IST 2016, RpcRetryingCaller{globalStartTime=1480599979418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:21 IST 2016, RpcRetryingCaller{globalStartTime=1480599979418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
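The traces above cycle through the same three root causes: `FailedServerException` (the master at hscale-dev1-nn:16000 is on the failed-servers list), `ConnectException: Connection refused` (nothing is listening on the master port), and `IOException: Can't get master address from ZooKeeper; znode data == null` (the master never registered its znode) — all symptoms of the HMaster being down rather than a RegionServer-side fault. When triaging a long log like this one, it can help to tally which root cause dominates. A minimal sketch (the embedded excerpt is a hypothetical sample in the shape of the lines above, not the full log):

```python
import re
from collections import Counter

# Hypothetical excerpt shaped like the log entries above.
log = """\
2016-12-01 19:16:15,436 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
2016-12-01 19:16:16,444 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
"""

# Capture the exception class that follows a "ServiceException:" wrapper
# or a "Caused by:" prefix -- i.e. the root cause, not the wrapper itself.
pattern = re.compile(r'(?:ServiceException|Caused by):\s+([\w.$]+(?:Exception|Error))')

causes = Counter(
    m.group(1)
    for line in log.splitlines()
    if (m := pattern.search(line))
)

for cls, n in causes.most_common():
    print(f"{n:4d}  {cls}")
```

Run against the real file (`Counter` over its lines instead of the sample string), a skew toward `ConnectException` versus `FailedServerException` indicates whether the client is mostly hitting a dead port or mostly short-circuiting on its failed-servers cache.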
2016-12-01 19:16:22,242 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:16:24,419 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:24,625 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:24,931 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
2016-12-01 19:16:25,438 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
2016-12-01 19:16:26,450 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
2016-12-01 19:16:26,453 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:16:24 IST 2016, RpcRetryingCaller{globalStartTime=1480599984418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:24 IST 2016, RpcRetryingCaller{globalStartTime=1480599984418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:24 IST 2016, RpcRetryingCaller{globalStartTime=1480599984418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:25 IST 2016, RpcRetryingCaller{globalStartTime=1480599984418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:26 IST 2016, RpcRetryingCaller{globalStartTime=1480599984418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:16:29,419 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
2016-12-01 19:16:29,625 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
2016-12-01 19:16:29,930 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
2016-12-01 19:16:30,433 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    ... 21 more
2016-12-01 19:16:31,027 INFO  [ReplicationExecutor-0] replication.ReplicationQueuesZKImpl: Atomically moving hscale-dev1-dn2,16020,1480599823976's wals to my queue
2016-12-01 19:16:31,440 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:31,444 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:16:29 IST 2016, RpcRetryingCaller{globalStartTime=1480599989418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:29 IST 2016, RpcRetryingCaller{globalStartTime=1480599989418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:29 IST 2016, RpcRetryingCaller{globalStartTime=1480599989418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:30 IST 2016, RpcRetryingCaller{globalStartTime=1480599989418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:31 IST 2016, RpcRetryingCaller{globalStartTime=1480599989418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:16:34,419 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:34,624 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:34,929 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:35,436 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:36,446 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:36,449 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:16:34 IST 2016, RpcRetryingCaller{globalStartTime=1480599994418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:34 IST 2016, RpcRetryingCaller{globalStartTime=1480599994418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:34 IST 2016, RpcRetryingCaller{globalStartTime=1480599994418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:35 IST 2016, RpcRetryingCaller{globalStartTime=1480599994418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:36 IST 2016, RpcRetryingCaller{globalStartTime=1480599994418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:16:39,419 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:39,624 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:39,929 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:40,433 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:41,439 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:41,442 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:16:39 IST 2016, RpcRetryingCaller{globalStartTime=1480599999418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:39 IST 2016, RpcRetryingCaller{globalStartTime=1480599999418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:39 IST 2016, RpcRetryingCaller{globalStartTime=1480599999418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:40 IST 2016, RpcRetryingCaller{globalStartTime=1480599999418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:41 IST 2016, RpcRetryingCaller{globalStartTime=1480599999418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
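The five attempts listed in the RetriesExhaustedException above (19:16:39.418 through 19:16:41.439) are spaced roughly 0.2 s, 0.3 s, 0.5 s and 1.0 s apart, which is consistent with the HBase client's backoff table applied to the logged `pause=100, retries=5`. A minimal sketch of that schedule follows; the multiplier table is the one shipped in HBase 1.1's `ConnectionUtils.RETRY_BACKOFF`, and the exact index the caller uses for the first sleep is an assumption (the observed gaps suggest it starts from the second entry):

```python
# Sketch of the HBase client's retry backoff, assuming the standard
# RETRY_BACKOFF multiplier table from o.a.h.h.client.ConnectionUtils.
RETRY_BACKOFF = [1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200]

def pause_time(pause_ms, tries):
    """Sleep (in ms) before retry number `tries` (0-based), capped at the table end."""
    return pause_ms * RETRY_BACKOFF[min(tries, len(RETRY_BACKOFF) - 1)]

# With pause=100 and retries=5, as logged by RpcRetryingCaller above,
# the sleeps come out of the head of the table:
schedule = [pause_time(100, t) for t in range(5)]
print(schedule)  # [100, 200, 300, 500, 1000]
```

This is why the retries exhaust in about two seconds: with only 5 attempts the backoff never reaches the larger multipliers, so a master that stays down for more than a couple of seconds turns every `getClusterStatus` call into the ERROR seen at 19:16:41,442.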
2016-12-01 19:16:44,419 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
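When a log cycles through near-identical traces like the ones above, a quick way to see what is actually failing is to tally the distinct root causes rather than read each trace. A small sketch (the inline sample stands in for reading the real regionserver log, whose path will vary by install):

```python
# Tally distinct "Caused by" exception types from regionserver log lines.
# The inline sample below stands in for the contents of the real log file.
import re
from collections import Counter

sample = """\
Caused by: java.net.ConnectException: Connection refused
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
Caused by: java.net.ConnectException: Connection refused
"""

# Capture the fully qualified exception class after each "Caused by:".
causes = Counter(re.findall(r"Caused by: ([A-Za-z0-9.]+(?:Exception|Error))", sample))
for exc, n in causes.most_common():
    print(n, exc)
```

In this log both root causes (`ConnectException` and `FailedServerException`) point at the same endpoint, hscale-dev1-nn/10.60.70.10:16000, which together with the "znode data == null" messages indicates the HMaster process is down or not yet registered in ZooKeeper, rather than a problem on the regionserver itself.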
2016-12-01 19:16:44,623 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:44,928 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:45,434 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:46,442 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:46,445 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:16:44 IST 2016, RpcRetryingCaller{globalStartTime=1480600004418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:44 IST 2016, RpcRetryingCaller{globalStartTime=1480600004418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:44 IST 2016, RpcRetryingCaller{globalStartTime=1480600004418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:45 IST 2016, RpcRetryingCaller{globalStartTime=1480600004418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:46 IST 2016, RpcRetryingCaller{globalStartTime=1480600004418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:16:49,419 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:49,623 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:49,927 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:50,431 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:51,437 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:51,440 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:16:49 IST 2016, RpcRetryingCaller{globalStartTime=1480600009418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:49 IST 2016, RpcRetryingCaller{globalStartTime=1480600009418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:49 IST 2016, RpcRetryingCaller{globalStartTime=1480600009418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:50 IST 2016, RpcRetryingCaller{globalStartTime=1480600009418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:51 IST 2016, RpcRetryingCaller{globalStartTime=1480600009418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:16:54,420 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:54,626 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:54,931 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:55,439 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:56,447 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:56,450 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:16:54 IST 2016, RpcRetryingCaller{globalStartTime=1480600014418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:54 IST 2016, RpcRetryingCaller{globalStartTime=1480600014418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:54 IST 2016, RpcRetryingCaller{globalStartTime=1480600014418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:55 IST 2016, RpcRetryingCaller{globalStartTime=1480600014418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:56 IST 2016, RpcRetryingCaller{globalStartTime=1480600014418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:16:59,419 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:59,623 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:16:59,928 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:17:00,434 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.FailedServerException: This server is in the failed servers list: hscale-dev1-nn/10.60.70.10:16000
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:701)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:17:01,440 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: java.net.ConnectException: Connection refused
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupConnection(RpcClientImpl.java:410)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:716)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1200)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:17:01,443 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:16:59 IST 2016, RpcRetryingCaller{globalStartTime=1480600019418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:59 IST 2016, RpcRetryingCaller{globalStartTime=1480600019418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:16:59 IST 2016, RpcRetryingCaller{globalStartTime=1480600019418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:17:00 IST 2016, RpcRetryingCaller{globalStartTime=1480600019418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:17:01 IST 2016, RpcRetryingCaller{globalStartTime=1480600019418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1540)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceStubMaker.makeStub(ConnectionManager.java:1560)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1711)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    ... 14 more
Caused by: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
    at org.apache.hadoop.hbase.zookeeper.MasterAddressTracker.getMasterAddress(MasterAddressTracker.java:154)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStubNoRetries(ConnectionManager.java:1491)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$StubMaker.makeStub(ConnectionManager.java:1531)
    ... 18 more
2016-12-01 19:17:04,712 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ipc.ServerNotRunningYetException): org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2317)
    at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:924)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55373)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ipc.ServerNotRunningYetException): org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2317)
    at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:924)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55373)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:17:04,915 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=301, waitTime=0
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=301, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=301, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:17:05,230 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ipc.ServerNotRunningYetException): org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2317)
    at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:924)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55373)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ipc.ServerNotRunningYetException): org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2317)
    at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:924)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55373)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:17:05,436 INFO  [ReplicationExecutor-0] replication.ReplicationQueuesZKImpl: Atomically moving hscale-dev1-dn3,16020,1480599826952's wals to my queue
2016-12-01 19:17:05,745 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ipc.ServerNotRunningYetException): org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2317)
    at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:924)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55373)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.ipc.ServerNotRunningYetException): org.apache.hadoop.hbase.ipc.ServerNotRunningYetException: Server is not running yet
    at org.apache.hadoop.hbase.master.HMaster.checkServiceStarted(HMaster.java:2317)
    at org.apache.hadoop.hbase.master.MasterRpcServices.isMasterRunning(MasterRpcServices.java:924)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55373)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
2016-12-01 19:17:06,774 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:17:04 IST 2016, RpcRetryingCaller{globalStartTime=1480600024418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:17:04 IST 2016, RpcRetryingCaller{globalStartTime=1480600024418, pause=100, retries=5}, org.apache.hadoop.hbase.MasterNotRunningException: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
Thu Dec 01 19:17:05 IST 2016, RpcRetryingCaller{globalStartTime=1480600024418, pause=100, retries=5}, org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=303, waitTime=0
Thu Dec 01 19:17:05 IST 2016, RpcRetryingCaller{globalStartTime=1480600024418, pause=100, retries=5}, org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=305, waitTime=0
Thu Dec 01 19:17:06 IST 2016, RpcRetryingCaller{globalStartTime=1480600024418, pause=100, retries=5}, org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=307, waitTime=1

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=307, waitTime=1
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.getClusterStatus(MasterProtos.java:58140)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.getClusterStatus(ConnectionManager.java:2036)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2769)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2765)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 14 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=307, waitTime=1
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:17:11,485 ERROR [hbase-region-load-updater-0] hbase.HBaseRegionLoads: Unable to fetch region load info
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=5, exceptions:
Thu Dec 01 19:17:09 IST 2016, RpcRetryingCaller{globalStartTime=1480600029418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:17:09 IST 2016, RpcRetryingCaller{globalStartTime=1480600029418, pause=100, retries=5}, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.PleaseHoldException): org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
    at org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2324)
    at org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:770)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55371)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2114)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)

Thu Dec 01 19:17:09 IST 2016, RpcRetryingCaller{globalStartTime=1480600029418, pause=100, retries=5}, org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=313, waitTime=0
Thu Dec 01 19:17:10 IST 2016, RpcRetryingCaller{globalStartTime=1480600029418, pause=100, retries=5}, org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=315, waitTime=0
Thu Dec 01 19:17:11 IST 2016, RpcRetryingCaller{globalStartTime=1480600029418, pause=100, retries=5}, org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=317, waitTime=0

    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:147)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=317, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.getClusterStatus(MasterProtos.java:58140)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$4.getClusterStatus(ConnectionManager.java:2036)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2769)
    at org.apache.hadoop.hbase.client.HBaseAdmin$33.call(HBaseAdmin.java:2765)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 14 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=317, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:17:19,418 WARN  [hbase-region-load-updater-0] client.ConnectionManager$HConnectionImplementation: Checking master connection
com.google.protobuf.ServiceException: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=326, waitTime=0
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:223)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287)
    at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$BlockingStub.isMasterRunning(MasterProtos.java:58152)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation$MasterServiceState.isMasterRunning(ConnectionManager.java:1444)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.isKeepAliveMasterConnectedAndRunning(ConnectionManager.java:2099)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.getKeepAliveMasterService(ConnectionManager.java:1708)
    at org.apache.hadoop.hbase.client.MasterCallable.prepare(MasterCallable.java:38)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:124)
    at org.apache.hadoop.hbase.client.HBaseAdmin.executeCallable(HBaseAdmin.java:4083)
    at org.apache.hadoop.hbase.client.HBaseAdmin.getClusterStatus(HBaseAdmin.java:2765)
    at com.splicemachine.access.hbase.H10PartitionAdmin.allServers(H10PartitionAdmin.java:120)
    at com.splicemachine.hbase.HBaseRegionLoads.fetchRegionLoads(HBaseRegionLoads.java:164)
    at com.splicemachine.hbase.HBaseRegionLoads.access$000(HBaseRegionLoads.java:60)
    at com.splicemachine.hbase.HBaseRegionLoads$1.run(HBaseRegionLoads.java:81)
    at com.splicemachine.concurrent.LoggingScheduledThreadPoolExecutor$LoggingRunnable.run(LoggingScheduledThreadPoolExecutor.java:75)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to hscale-dev1-nn/10.60.70.10:16000 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=326, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.wrapException(RpcClientImpl.java:1259)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1230)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213)
    ... 21 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to hscale-dev1-nn/10.60.70.10:16000 is closing. Call id=326, waitTime=0
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.cleanupCalls(RpcClientImpl.java:1047)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.close(RpcClientImpl.java:846)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.run(RpcClientImpl.java:574)
2016-12-01 19:17:22,230 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:18:11,234 INFO  [PriorityRpcServer.handler=14,queue=0,port=16020] regionserver.RSRpcServices: Close eb5cf006e72f3e59c033f8023a559abb, moving to hscale-dev1-dn4,16020,1480599948750
2016-12-01 19:18:11,250 INFO  [StoreCloserThread-PROCEDURE,2,1479977635472.eb5cf006e72f3e59c033f8023a559abb.-1] regionserver.HStore: Closed CF1
2016-12-01 19:18:11,286 INFO  [PriorityRpcServer.handler=18,queue=0,port=16020] regionserver.RSRpcServices: Close 99a13a250748cddebf50a0a937a3144a, moving to hscale-dev1-dn2,16020,1480599958903
2016-12-01 19:18:11,291 INFO  [StoreCloserThread-PROCEDURE,,1479977635472.99a13a250748cddebf50a0a937a3144a.-1] regionserver.HStore: Closed CF1
2016-12-01 19:18:11,299 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,299 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,300 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] write.ParallelWriterIndexCommitter: Shutting down ParallelWriterIndexCommitter because Indexer is being stopped
2016-12-01 19:18:11,300 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,300 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] recovery.TrackingParallelWriterIndexCommitter: Shutting down TrackingParallelWriterIndexCommitter
2016-12-01 19:18:11,300 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,302 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Closed PROCEDURE,2,1479977635472.eb5cf006e72f3e59c033f8023a559abb.
2016-12-01 19:18:11,302 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegionServer: Adding moved region record: eb5cf006e72f3e59c033f8023a559abb to hscale-dev1-dn4,16020,1480599948750 as of 129
2016-12-01 19:18:11,315 INFO  [PriorityRpcServer.handler=0,queue=0,port=16020] regionserver.RSRpcServices: Close 17022d5d42890169454bf30a0203da51, moving to hscale-dev1-dn2,16020,1480599958903
2016-12-01 19:18:11,327 INFO  [StoreCloserThread-ENCOUNTER,1,1479977632429.17022d5d42890169454bf30a0203da51.-1] regionserver.HStore: Closed CF1
2016-12-01 19:18:11,339 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,340 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,342 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] write.ParallelWriterIndexCommitter: Shutting down ParallelWriterIndexCommitter because Indexer is being stopped
2016-12-01 19:18:11,342 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,343 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] recovery.TrackingParallelWriterIndexCommitter: Shutting down TrackingParallelWriterIndexCommitter
2016-12-01 19:18:11,343 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,344 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Closed PROCEDURE,,1479977635472.99a13a250748cddebf50a0a937a3144a.
2016-12-01 19:18:11,344 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegionServer: Adding moved region record: 99a13a250748cddebf50a0a937a3144a to hscale-dev1-dn2,16020,1480599958903 as of 106
2016-12-01 19:18:11,351 INFO  [PriorityRpcServer.handler=16,queue=0,port=16020] regionserver.RSRpcServices: Close 436b641f523cc9e1add60225998a4a4b, moving to hscale-dev1-dn4,16020,1480599948750
2016-12-01 19:18:11,367 INFO  [StoreCloserThread-ENCOUNTER,3,1479977632429.436b641f523cc9e1add60225998a4a4b.-1] regionserver.HStore: Closed CF1
2016-12-01 19:18:11,376 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,376 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,376 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] write.ParallelWriterIndexCommitter: Shutting down ParallelWriterIndexCommitter because Indexer is being stopped
2016-12-01 19:18:11,376 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,376 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] recovery.TrackingParallelWriterIndexCommitter: Shutting down TrackingParallelWriterIndexCommitter
2016-12-01 19:18:11,376 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,377 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Closed ENCOUNTER,1,1479977632429.17022d5d42890169454bf30a0203da51.
2016-12-01 19:18:11,377 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegionServer: Adding moved region record: 17022d5d42890169454bf30a0203da51 to hscale-dev1-dn2,16020,1480599958903 as of 179
2016-12-01 19:18:11,393 INFO  [PriorityRpcServer.handler=4,queue=0,port=16020] regionserver.RSRpcServices: Close 1e04655659c5902dd127923cf1a58e61, moving to hscale-dev1-dn2,16020,1480599958903
2016-12-01 19:18:11,408 INFO  [StoreCloserThread-PATIENT,3,1479977629367.1e04655659c5902dd127923cf1a58e61.-1] regionserver.HStore: Closed CF1
2016-12-01 19:18:11,414 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,414 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,414 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] write.ParallelWriterIndexCommitter: Shutting down ParallelWriterIndexCommitter because Indexer is being stopped
2016-12-01 19:18:11,414 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,414 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] recovery.TrackingParallelWriterIndexCommitter: Shutting down TrackingParallelWriterIndexCommitter
2016-12-01 19:18:11,414 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,414 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Closed ENCOUNTER,3,1479977632429.436b641f523cc9e1add60225998a4a4b.
2016-12-01 19:18:11,414 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegionServer: Adding moved region record: 436b641f523cc9e1add60225998a4a4b to hscale-dev1-dn4,16020,1480599948750 as of 172
2016-12-01 19:18:11,444 INFO  [PriorityRpcServer.handler=6,queue=0,port=16020] regionserver.RSRpcServices: Close 3862bdfc3021330623e5302d2207998e, moving to hscale-dev1-dn4,16020,1480599948750
2016-12-01 19:18:11,454 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,454 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,454 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] write.ParallelWriterIndexCommitter: Shutting down ParallelWriterIndexCommitter because Indexer is being stopped
2016-12-01 19:18:11,454 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,454 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] recovery.TrackingParallelWriterIndexCommitter: Shutting down TrackingParallelWriterIndexCommitter
2016-12-01 19:18:11,454 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,454 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Closed PATIENT,3,1479977629367.1e04655659c5902dd127923cf1a58e61.
2016-12-01 19:18:11,454 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegionServer: Adding moved region record: 1e04655659c5902dd127923cf1a58e61 to hscale-dev1-dn2,16020,1480599958903 as of 146
2016-12-01 19:18:11,469 INFO  [StoreCloserThread-PATIENT,2,1479977629367.3862bdfc3021330623e5302d2207998e.-1] regionserver.HStore: Closed CF1
2016-12-01 19:18:11,486 INFO  [PriorityRpcServer.handler=9,queue=1,port=16020] regionserver.RSRpcServices: Close e5c0350ed1099979ad85330cdeded026, moving to hscale-dev1-dn2,16020,1480599958903
2016-12-01 19:18:11,499 INFO  [StoreCloserThread-FMD,2,1479977442279.e5c0350ed1099979ad85330cdeded026.-1] regionserver.HStore: Closed CF1
2016-12-01 19:18:11,517 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,517 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,517 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] write.ParallelWriterIndexCommitter: Shutting down ParallelWriterIndexCommitter because Indexer is being stopped
2016-12-01 19:18:11,517 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,517 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] recovery.TrackingParallelWriterIndexCommitter: Shutting down TrackingParallelWriterIndexCommitter
2016-12-01 19:18:11,517 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,517 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Closed PATIENT,2,1479977629367.3862bdfc3021330623e5302d2207998e.
2016-12-01 19:18:11,517 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegionServer: Adding moved region record: 3862bdfc3021330623e5302d2207998e to hscale-dev1-dn4,16020,1480599948750 as of 143
2016-12-01 19:18:11,534 INFO  [PriorityRpcServer.handler=2,queue=0,port=16020] regionserver.RSRpcServices: Close 3de8ae6766ac73a2f1418a9c4859cd10, moving to hscale-dev1-dn4,16020,1480599948750
2016-12-01 19:18:11,539 INFO  [StoreCloserThread-FMD,3,1479977442279.3de8ae6766ac73a2f1418a9c4859cd10.-1] regionserver.HStore: Closed CF1
2016-12-01 19:18:11,556 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,556 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,556 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] write.ParallelWriterIndexCommitter: Shutting down ParallelWriterIndexCommitter because Indexer is being stopped
2016-12-01 19:18:11,556 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,556 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] recovery.TrackingParallelWriterIndexCommitter: Shutting down TrackingParallelWriterIndexCommitter
2016-12-01 19:18:11,556 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,556 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Closed FMD,2,1479977442279.e5c0350ed1099979ad85330cdeded026.
2016-12-01 19:18:11,556 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegionServer: Adding moved region record: e5c0350ed1099979ad85330cdeded026 to hscale-dev1-dn2,16020,1480599958903 as of 209
2016-12-01 19:18:11,572 INFO  [PriorityRpcServer.handler=8,queue=0,port=16020] regionserver.RSRpcServices: Close e7a359cdd8fa8f6bf55164aef866ec7b, moving to hscale-dev1-dn4,16020,1480599948750
2016-12-01 19:18:11,581 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x03,1480593676447.e7a359cdd8fa8f6bf55164aef866ec7b.-1] regionserver.HStore: Closed P
2016-12-01 19:18:11,581 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x03,1480593676447.e7a359cdd8fa8f6bf55164aef866ec7b.-1] regionserver.HStore: Closed V
2016-12-01 19:18:11,606 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,607 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,607 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] write.ParallelWriterIndexCommitter: Shutting down ParallelWriterIndexCommitter because Indexer is being stopped
2016-12-01 19:18:11,607 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,607 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] recovery.TrackingParallelWriterIndexCommitter: Shutting down TrackingParallelWriterIndexCommitter
2016-12-01 19:18:11,607 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] parallel.BaseTaskRunner: Shutting down task runner because Indexer is being stopped
2016-12-01 19:18:11,607 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Closed FMD,3,1479977442279.3de8ae6766ac73a2f1418a9c4859cd10.
2016-12-01 19:18:11,607 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegionServer: Adding moved region record: 3de8ae6766ac73a2f1418a9c4859cd10 to hscale-dev1-dn4,16020,1480599948750 as of 144
2016-12-01 19:18:11,632 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,632 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Closed splice:SPLICE_TXN,\x03,1480593676447.e7a359cdd8fa8f6bf55164aef866ec7b.
2016-12-01 19:18:11,632 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegionServer: Adding moved region record: e7a359cdd8fa8f6bf55164aef866ec7b to hscale-dev1-dn4,16020,1480599948750 as of 6
2016-12-01 19:18:11,637 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020] regionserver.RSRpcServices: Close b9b2c4cfa770388f4d26e83953c2e495, moving to hscale-dev1-dn2,16020,1480599958903
2016-12-01 19:18:11,658 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x04,1480593676447.b9b2c4cfa770388f4d26e83953c2e495.-1] regionserver.HStore: Closed P
2016-12-01 19:18:11,659 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x04,1480593676447.b9b2c4cfa770388f4d26e83953c2e495.-1] regionserver.HStore: Closed V
2016-12-01 19:18:11,676 INFO  [PriorityRpcServer.handler=10,queue=0,port=16020] regionserver.RSRpcServices: Close 6a3106089ce462b563f88da133dba689, moving to hscale-dev1-dn4,16020,1480599948750
2016-12-01 19:18:11,690 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x05,1480593676447.6a3106089ce462b563f88da133dba689.-1] regionserver.HStore: Closed P
2016-12-01 19:18:11,690 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x05,1480593676447.6a3106089ce462b563f88da133dba689.-1] regionserver.HStore: Closed V
2016-12-01 19:18:11,714 INFO  [PriorityRpcServer.handler=12,queue=0,port=16020] regionserver.RSRpcServices: Close 4ae4c5ba5fb97295e3d04f32627b110f, moving to hscale-dev1-dn4,16020,1480599948750
2016-12-01 19:18:11,726 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,726 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Closed splice:SPLICE_TXN,\x04,1480593676447.b9b2c4cfa770388f4d26e83953c2e495.
2016-12-01 19:18:11,726 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegionServer: Adding moved region record: b9b2c4cfa770388f4d26e83953c2e495 to hscale-dev1-dn2,16020,1480599958903 as of 6
2016-12-01 19:18:11,727 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x09,1480593676447.4ae4c5ba5fb97295e3d04f32627b110f.-1] regionserver.HStore: Closed P
2016-12-01 19:18:11,727 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x09,1480593676447.4ae4c5ba5fb97295e3d04f32627b110f.-1] regionserver.HStore: Closed V
2016-12-01 19:18:11,754 INFO  [PriorityRpcServer.handler=17,queue=1,port=16020] regionserver.RSRpcServices: Close e408a4ef03608ae6738cde8286584311, moving to hscale-dev1-dn4,16020,1480599948750
2016-12-01 19:18:11,759 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x0A,1480593676447.e408a4ef03608ae6738cde8286584311.-1] regionserver.HStore: Closed P
2016-12-01 19:18:11,759 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x0A,1480593676447.e408a4ef03608ae6738cde8286584311.-1] regionserver.HStore: Closed V
2016-12-01 19:18:11,765 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,765 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Closed splice:SPLICE_TXN,\x05,1480593676447.6a3106089ce462b563f88da133dba689.
2016-12-01 19:18:11,765 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegionServer: Adding moved region record: 6a3106089ce462b563f88da133dba689 to hscale-dev1-dn4,16020,1480599948750 as of 6
2016-12-01 19:18:11,786 INFO  [PriorityRpcServer.handler=14,queue=0,port=16020] regionserver.RSRpcServices: Close d8e5258e5ee4f6c1a61d91bd224d0bfa, moving to hscale-dev1-dn4,16020,1480599948750
2016-12-01 19:18:11,803 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,803 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Closed splice:SPLICE_TXN,\x09,1480593676447.4ae4c5ba5fb97295e3d04f32627b110f.
2016-12-01 19:18:11,803 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegionServer: Adding moved region record: 4ae4c5ba5fb97295e3d04f32627b110f to hscale-dev1-dn4,16020,1480599948750 as of 8
2016-12-01 19:18:11,829 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,829 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Closed splice:SPLICE_TXN,\x0A,1480593676447.e408a4ef03608ae6738cde8286584311.
2016-12-01 19:18:11,829 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegionServer: Adding moved region record: e408a4ef03608ae6738cde8286584311 to hscale-dev1-dn4,16020,1480599948750 as of 6
2016-12-01 19:18:11,833 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Started memstore flush for splice:SPLICE_TXN,\x01,1480593676447.d8e5258e5ee4f6c1a61d91bd224d0bfa., current region memstore size 1.12 KB, and 2/2 column families' memstores are being flushed.
2016-12-01 19:18:11,846 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020] regionserver.RSRpcServices: Close 443b9beecbf6a1a3264edb20b2230a52, moving to hscale-dev1-dn2,16020,1480599958903
2016-12-01 19:18:11,885 INFO  [PriorityRpcServer.handler=13,queue=1,port=16020] regionserver.RSRpcServices: Close af148aa23be6b8294a12150e68bdb64f, moving to hscale-dev1-dn2,16020,1480599958903
2016-12-01 19:18:11,885 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x07,1480593676447.443b9beecbf6a1a3264edb20b2230a52.-1] regionserver.HStore: Closed P
2016-12-01 19:18:11,885 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x07,1480593676447.443b9beecbf6a1a3264edb20b2230a52.-1] regionserver.HStore: Closed V
2016-12-01 19:18:11,889 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x08,1480593676447.af148aa23be6b8294a12150e68bdb64f.-1] regionserver.HStore: Closed P
2016-12-01 19:18:11,889 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x08,1480593676447.af148aa23be6b8294a12150e68bdb64f.-1] regionserver.HStore: Closed V
2016-12-01 19:18:11,918 INFO  [PriorityRpcServer.handler=18,queue=0,port=16020] regionserver.RSRpcServices: Close a89555c052f2e4a92d6c1feb6047dbb4, moving to hscale-dev1-dn2,16020,1480599958903
2016-12-01 19:18:11,938 INFO  [PriorityRpcServer.handler=0,queue=0,port=16020] regionserver.RSRpcServices: Close 720f27c20e300e2c5bc7b5d3b8eddcbf, moving to hscale-dev1-dn2,16020,1480599958903
2016-12-01 19:18:11,948 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=18, memsize=1.1 K, hasBloomFilter=true, into tmp file hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/d8e5258e5ee4f6c1a61d91bd224d0bfa/.tmp/3a2d4432aa534af7b36a692c8b45bf8e
2016-12-01 19:18:11,960 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,960 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Closed splice:SPLICE_TXN,\x07,1480593676447.443b9beecbf6a1a3264edb20b2230a52.
2016-12-01 19:18:11,960 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegionServer: Adding moved region record: 443b9beecbf6a1a3264edb20b2230a52 to hscale-dev1-dn2,16020,1480599958903 as of 6
2016-12-01 19:18:11,973 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:11,974 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Closed splice:SPLICE_TXN,\x08,1480593676447.af148aa23be6b8294a12150e68bdb64f.
2016-12-01 19:18:11,974 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegionServer: Adding moved region record: af148aa23be6b8294a12150e68bdb64f to hscale-dev1-dn2,16020,1480599958903 as of 6
2016-12-01 19:18:11,978 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x0F,1480593676447.a89555c052f2e4a92d6c1feb6047dbb4.-1] regionserver.HStore: Closed P
2016-12-01 19:18:11,978 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x0F,1480593676447.a89555c052f2e4a92d6c1feb6047dbb4.-1] regionserver.HStore: Closed V
2016-12-01 19:18:12,002 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x06,1480593676447.720f27c20e300e2c5bc7b5d3b8eddcbf.-1] regionserver.HStore: Closed P
2016-12-01 19:18:12,002 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x06,1480593676447.720f27c20e300e2c5bc7b5d3b8eddcbf.-1] regionserver.HStore: Closed V
2016-12-01 19:18:12,024 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HStore: Added hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/splice/SPLICE_TXN/d8e5258e5ee4f6c1a61d91bd224d0bfa/V/3a2d4432aa534af7b36a692c8b45bf8e, entries=7, sequenceid=18, filesize=4.9 K
2016-12-01 19:18:12,027 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Finished memstore flush of ~1.12 KB/1144, currentsize=0 B/0 for region splice:SPLICE_TXN,\x01,1480593676447.d8e5258e5ee4f6c1a61d91bd224d0bfa. in 194ms, sequenceid=18, compaction requested=false
2016-12-01 19:18:12,028 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x01,1480593676447.d8e5258e5ee4f6c1a61d91bd224d0bfa.-1] regionserver.HStore: Closed P
2016-12-01 19:18:12,041 INFO  [StoreCloserThread-splice:SPLICE_TXN,\x01,1480593676447.d8e5258e5ee4f6c1a61d91bd224d0bfa.-1] regionserver.HStore: Closed V
2016-12-01 19:18:12,080 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:12,080 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Closed splice:SPLICE_TXN,\x0F,1480593676447.a89555c052f2e4a92d6c1feb6047dbb4.
2016-12-01 19:18:12,080 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegionServer: Adding moved region record: a89555c052f2e4a92d6c1feb6047dbb4 to hscale-dev1-dn2,16020,1480599958903 as of 6
2016-12-01 19:18:12,093 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:12,094 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegion: Closed splice:SPLICE_TXN,\x06,1480593676447.720f27c20e300e2c5bc7b5d3b8eddcbf.
2016-12-01 19:18:12,094 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-0] regionserver.HRegionServer: Adding moved region record: 720f27c20e300e2c5bc7b5d3b8eddcbf to hscale-dev1-dn2,16020,1480599958903 as of 10
2016-12-01 19:18:12,106 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:18:12,106 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegion: Closed splice:SPLICE_TXN,\x01,1480593676447.d8e5258e5ee4f6c1a61d91bd224d0bfa.
2016-12-01 19:18:12,106 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-1] regionserver.HRegionServer: Adding moved region record: d8e5258e5ee4f6c1a61d91bd224d0bfa to hscale-dev1-dn4,16020,1480599948750 as of 18
2016-12-01 19:18:22,235 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:18:22,239 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:18:22,240 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:18:22,242 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:18:22,243 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:18:22,244 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:18:24,592 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.51 MB, freeSize=1.20 GB, max=1.20 GB, blockCount=35, accesses=453, hits=418, hitRatio=92.27%, , cachingAccesses=448, cachingHits=413, cachingHitsRatio=92.19%, evictions=29, evicted=3, evictedPerRun=0.1034482792019844
2016-12-01 19:19:22,233 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:19:56,186 INFO  [hscale-dev1-dn1,16020,1480599802236_ChoreService_1] regionserver.HRegionServer: hscale-dev1-dn1,16020,1480599802236-MemstoreFlusherChore requesting flush for region hbase:meta,,1.1588230740 after a delay of 20477
2016-12-01 19:20:06,186 INFO  [hscale-dev1-dn1,16020,1480599802236_ChoreService_1] regionserver.HRegionServer: hscale-dev1-dn1,16020,1480599802236-MemstoreFlusherChore requesting flush for region hbase:meta,,1.1588230740 after a delay of 13084
2016-12-01 19:20:16,187 INFO  [hscale-dev1-dn1,16020,1480599802236_ChoreService_1] regionserver.HRegionServer: hscale-dev1-dn1,16020,1480599802236-MemstoreFlusherChore requesting flush for region hbase:meta,,1.1588230740 after a delay of 7515
2016-12-01 19:20:16,667 INFO  [MemStoreFlusher.0] regionserver.HRegion: Started memstore flush for hbase:meta,,1.1588230740, current region memstore size 52.23 KB, and 1/1 column families' memstores are being flushed.
2016-12-01 19:20:16,763 INFO  [MemStoreFlusher.0] regionserver.DefaultStoreFlusher: Flushed, sequenceid=999, memsize=52.2 K, hasBloomFilter=false, into tmp file hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/meta/1588230740/.tmp/412ff06bb37e471b8f9aedff58bf72c6
2016-12-01 19:20:16,803 INFO  [MemStoreFlusher.0] regionserver.HStore: Added hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/meta/1588230740/info/412ff06bb37e471b8f9aedff58bf72c6, entries=220, sequenceid=999, filesize=30.1 K
2016-12-01 19:20:16,808 INFO  [MemStoreFlusher.0] regionserver.HRegion: Finished memstore flush of ~52.23 KB/53480, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 141ms, sequenceid=999, compaction requested=false
2016-12-01 19:20:22,230 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:21:22,235 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:21:22,238 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:21:22,243 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:21:22,245 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:21:22,247 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:22:22,233 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:22:22,235 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:23:22,233 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:23:22,235 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:23:22,239 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:23:24,592 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.53 MB, freeSize=1.20 GB, max=1.20 GB, blockCount=39, accesses=623, hits=584, hitRatio=93.74%, , cachingAccesses=618, cachingHits=579, cachingHitsRatio=93.69%, evictions=59, evicted=3, evictedPerRun=0.050847455859184265
2016-12-01 19:24:22,233 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:24:22,235 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:24:22,238 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:24:22,239 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:25:22,234 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:25:53,469 INFO  [PriorityRpcServer.handler=9,queue=1,port=16020] regionserver.RSRpcServices: Close 562458d14118a6f198dad32d8a0d0b12, moving to null
2016-12-01 19:25:53,474 INFO  [StoreCloserThread-SYSTEM.STATS,,1479977358242.562458d14118a6f198dad32d8a0d0b12.-1] regionserver.HStore: Closed 0
2016-12-01 19:25:53,524 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] coprocessor.TxnLifecycleEndpoint: Shutting down TxnLifecycleEndpoint
2016-12-01 19:25:53,525 INFO  [RS_CLOSE_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Closed SYSTEM.STATS,,1479977358242.562458d14118a6f198dad32d8a0d0b12.
2016-12-01 19:25:53,597 INFO  [PriorityRpcServer.handler=7,queue=1,port=16020] regionserver.RSRpcServices: Open SYSTEM.STATS,,1479977358242.562458d14118a6f198dad32d8a0d0b12.
2016-12-01 19:25:53,624 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.AccessController was loaded successfully with priority (536870911).
2016-12-01 19:25:53,624 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.token.TokenProvider was loaded successfully with priority (536870912).
2016-12-01 19:25:53,640 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint was loaded successfully with priority (536870913).
2016-12-01 19:25:53,640 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.MemstoreAwareObserver was loaded successfully with priority (536870914).
2016-12-01 19:25:53,640 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.derby.hbase.SpliceIndexEndpoint was loaded successfully with priority (536870915).
2016-12-01 19:25:53,640 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.RegionSizeEndpoint was loaded successfully with priority (536870916).
2016-12-01 19:25:53,640 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.TxnLifecycleEndpoint was loaded successfully with priority (536870917).
2016-12-01 19:25:53,641 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.si.data.hbase.coprocessor.SIObserver was loaded successfully with priority (536870918).
2016-12-01 19:25:53,641 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] coprocessor.CoprocessorHost: System coprocessor com.splicemachine.hbase.BackupEndpointObserver was loaded successfully with priority (536870919).
2016-12-01 19:25:53,645 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.hadoop.hbase.coprocessor.MultiRowMutationEndpoint from HTD of SYSTEM.STATS successfully.
2016-12-01 19:25:53,645 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver from HTD of SYSTEM.STATS successfully.
2016-12-01 19:25:53,645 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ServerCachingEndpointImpl from HTD of SYSTEM.STATS successfully.
2016-12-01 19:25:53,645 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.ScanRegionObserver from HTD of SYSTEM.STATS successfully.
2016-12-01 19:25:53,645 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.RegionCoprocessorHost: Loaded coprocessor org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver from HTD of SYSTEM.STATS successfully.
2016-12-01 19:25:53,651 INFO  [StoreOpener-562458d14118a6f198dad32d8a0d0b12-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=39, currentSize=1606424, freeSize=1286883816, maxSize=1288490240, heapSize=1606424, minSize=1224065664, minFactor=0.95, multiSize=612032832, multiFactor=0.5, singleSize=306016416, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=true, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2016-12-01 19:25:53,651 INFO  [StoreOpener-562458d14118a6f198dad32d8a0d0b12-1] compactions.CompactionConfiguration: size [16777216, 260046848); files [5, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2016-12-01 19:25:53,717 INFO  [RS_OPEN_REGION-hscale-dev1-dn1:16020-2] regionserver.HRegion: Onlined 562458d14118a6f198dad32d8a0d0b12; next sequenceid=190
2016-12-01 19:25:53,723 INFO  [PostOpenDeployTasks:562458d14118a6f198dad32d8a0d0b12] regionserver.HRegionServer: Post open deploy tasks for SYSTEM.STATS,,1479977358242.562458d14118a6f198dad32d8a0d0b12.
2016-12-01 19:25:53,738 INFO  [PostOpenDeployTasks:562458d14118a6f198dad32d8a0d0b12] hbase.MetaTableAccessor: Updated row SYSTEM.STATS,,1479977358242.562458d14118a6f198dad32d8a0d0b12. with server=hscale-dev1-dn1,16020,1480599802236
2016-12-01 19:25:53,937 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020] zookeeper.RecoverableZooKeeper: Process identifier=hconnection-0x7d5d1078 connecting to ZooKeeper ensemble=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181
2016-12-01 19:25:53,937 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020] zookeeper.ZooKeeper: Initiating client connection, connectString=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181 sessionTimeout=120000 watcher=hconnection-0x7d5d10780x0, quorum=hscale-dev1-dn1:2181,hscale-dev1-dn3:2181,hscale-dev1-dn2:2181,hscale-dev1-dn4:2181, baseZNode=/hbase-secure
2016-12-01 19:25:53,938 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020-SendThread(hscale-dev1-dn2:2181)] client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
2016-12-01 19:25:53,938 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Opening socket connection to server hscale-dev1-dn2/10.60.70.12:2181. Will attempt to SASL-authenticate using Login Context section 'Client'
2016-12-01 19:25:53,939 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Socket connection established to hscale-dev1-dn2/10.60.70.12:2181, initiating session
2016-12-01 19:25:53,958 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020-SendThread(hscale-dev1-dn2:2181)] zookeeper.ClientCnxn: Session establishment complete on server hscale-dev1-dn2/10.60.70.12:2181, sessionid = 0x258ba9a256f001a, negotiated timeout = 120000
2016-12-01 19:25:53,972 WARN  [PriorityRpcServer.handler=1,queue=1,port=16020] hbase.HBaseConfiguration: Config option "hbase.regionserver.lease.period" is deprecated. Instead, use "hbase.client.scanner.timeout.period"
2016-12-01 19:25:54,001 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020] client.ConnectionManager$HConnectionImplementation: Closing zookeeper sessionid=0x258ba9a256f001a
2016-12-01 19:25:54,019 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020] zookeeper.ZooKeeper: Session: 0x258ba9a256f001a closed
2016-12-01 19:25:54,020 INFO  [PriorityRpcServer.handler=1,queue=1,port=16020-EventThread] zookeeper.ClientCnxn: EventThread shut down
2016-12-01 19:26:22,233 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:26:22,235 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:26:22,237 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:26:22,241 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:27:22,234 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:27:22,235 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:28:22,233 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:28:22,235 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:28:22,237 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:28:22,239 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:28:22,242 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:28:24,591 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.56 MB, freeSize=1.20 GB, max=1.20 GB, blockCount=41, accesses=876, hits=835, hitRatio=95.32%, , cachingAccesses=871, cachingHits=830, cachingHitsRatio=95.29%, evictions=89, evicted=3, evictedPerRun=0.033707864582538605
2016-12-01 19:30:22,233 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:30:22,235 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:30:22,237 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:30:22,239 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:30:22,242 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:30:22,243 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:30:22,245 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:30:56,187 INFO  [hscale-dev1-dn1,16020,1480599802236_ChoreService_1] regionserver.HRegionServer: hscale-dev1-dn1,16020,1480599802236-MemstoreFlusherChore requesting flush for region hbase:meta,,1.1588230740 after a delay of 14110
2016-12-01 19:31:06,187 INFO  [hscale-dev1-dn1,16020,1480599802236_ChoreService_1] regionserver.HRegionServer: hscale-dev1-dn1,16020,1480599802236-MemstoreFlusherChore requesting flush for region hbase:meta,,1.1588230740 after a delay of 6547
2016-12-01 19:31:10,298 INFO  [MemStoreFlusher.1] regionserver.HRegion: Started memstore flush for hbase:meta,,1.1588230740, current region memstore size 720 B, and 1/1 column families' memstores are being flushed.
2016-12-01 19:31:10,385 INFO  [MemStoreFlusher.1] regionserver.DefaultStoreFlusher: Flushed, sequenceid=1004, memsize=720, hasBloomFilter=false, into tmp file hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/meta/1588230740/.tmp/0a3a9eb38b3d412c98ed4477841b82d7
2016-12-01 19:31:10,428 INFO  [MemStoreFlusher.1] regionserver.HStore: Added hdfs://hscale-dev1-nn:8020/apps/hbase/data/data/hbase/meta/1588230740/info/0a3a9eb38b3d412c98ed4477841b82d7, entries=3, sequenceid=1004, filesize=5.1 K
2016-12-01 19:31:10,432 INFO  [MemStoreFlusher.1] regionserver.HRegion: Finished memstore flush of ~720 B/720, currentsize=0 B/0 for region hbase:meta,,1.1588230740 in 134ms, sequenceid=1004, compaction requested=false
2016-12-01 19:32:22,233 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:32:22,235 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:32:22,237 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:32:22,239 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:32:22,242 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:32:22,243 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:32:22,244 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:33:24,592 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.56 MB, freeSize=1.20 GB, max=1.20 GB, blockCount=42, accesses=1153, hits=1111, hitRatio=96.36%, , cachingAccesses=1148, cachingHits=1106, cachingHitsRatio=96.34%, evictions=119, evicted=3, evictedPerRun=0.02521008439362049
2016-12-01 19:34:22,232 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:34:22,234 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:34:22,236 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:34:22,238 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:34:22,241 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:34:22,242 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:34:22,244 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:36:22,232 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:36:22,234 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:36:22,236 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:36:22,243 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:36:22,245 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:36:22,245 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:36:22,246 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:38:22,232 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:38:22,234 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:38:22,236 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:38:22,237 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:38:24,592 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.56 MB, freeSize=1.20 GB, max=1.20 GB, blockCount=42, accesses=1324, hits=1282, hitRatio=96.83%, , cachingAccesses=1319, cachingHits=1277, cachingHitsRatio=96.82%, evictions=149, evicted=3, evictedPerRun=0.020134227350354195
2016-12-01 19:39:22,232 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:39:22,234 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:39:22,237 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:40:22,232 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:40:22,234 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:40:22,236 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:40:22,238 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:41:22,231 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:42:22,232 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:42:22,234 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:42:22,236 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:42:22,239 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:42:22,241 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:42:22,242 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:43:24,592 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.56 MB, freeSize=1.20 GB, max=1.20 GB, blockCount=42, accesses=1495, hits=1453, hitRatio=97.19%, , cachingAccesses=1490, cachingHits=1448, cachingHitsRatio=97.18%, evictions=179, evicted=3, evictedPerRun=0.016759777441620827
2016-12-01 19:44:22,232 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:44:22,234 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:44:22,236 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:44:22,237 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:44:22,240 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:44:22,241 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:44:22,243 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:46:22,233 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:46:22,236 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:46:22,237 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:46:22,239 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:47:22,232 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:47:22,234 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:47:22,235 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:48:22,233 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:48:22,235 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:48:22,237 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:48:22,238 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:48:24,592 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.56 MB, freeSize=1.20 GB, max=1.20 GB, blockCount=42, accesses=1666, hits=1624, hitRatio=97.48%, , cachingAccesses=1661, cachingHits=1619, cachingHitsRatio=97.47%, evictions=209, evicted=3, evictedPerRun=0.014354066923260689
2016-12-01 19:49:22,229 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics
2016-12-01 19:49:22,231 WARN  [timeline] timeline.HadoopTimelineMetricsSink: Unable to send metrics to collector by address:http://hscale-dev1-nn:6188/ws/v1/timeline/metrics