Raspbian Package Auto-Building

Build log for consul (0.6.3~dfsg-2) on armhf

consul 0.6.3~dfsg-2 armhf → 2016-03-28 06:05:22

sbuild (Debian sbuild) 0.66.0 (04 Oct 2015) on testbuildd.raspbian.org

+==============================================================================+
| consul 0.6.3~dfsg-2 (armhf)                                28 Mar 2016 05:38 |
+==============================================================================+

Package: consul
Version: 0.6.3~dfsg-2
Source Version: 0.6.3~dfsg-2
Distribution: stretch-staging
Machine Architecture: armhf
Host Architecture: armhf
Build Architecture: armhf

I: NOTICE: Log filtering will replace 'build/consul-96jG5Z/consul-0.6.3~dfsg' with '<<PKGBUILDDIR>>'
I: NOTICE: Log filtering will replace 'build/consul-96jG5Z' with '<<BUILDDIR>>'
I: NOTICE: Log filtering will replace 'var/lib/schroot/mount/stretch-staging-armhf-sbuild-898657f0-1da6-4a1e-b6b0-a19d6f41c511' with '<<CHROOT>>'

+------------------------------------------------------------------------------+
| Update chroot                                                                |
+------------------------------------------------------------------------------+

Get:1 http://172.17.0.1/private stretch-staging InRelease [11.3 kB]
Get:2 http://172.17.0.1/private stretch-staging/main Sources [8843 kB]
Get:3 http://172.17.0.1/private stretch-staging/main armhf Packages [10.9 MB]
Fetched 19.8 MB in 24s (816 kB/s)
Reading package lists...
W: No sandbox user '_apt' on the system, can not drop privileges

+------------------------------------------------------------------------------+
| Fetch source files                                                           |
+------------------------------------------------------------------------------+


Check APT
---------

Checking available source versions...

Download source files with APT
------------------------------

Reading package lists...
NOTICE: 'consul' packaging is maintained in the 'Git' version control system at:
git://anonscm.debian.org/pkg-go/packages/golang-github-hashicorp-consul.git
Please use:
git clone git://anonscm.debian.org/pkg-go/packages/golang-github-hashicorp-consul.git
to retrieve the latest (possibly unreleased) updates to the package.
Need to get 588 kB of source archives.
Get:1 http://172.17.0.1/private stretch-staging/main consul 0.6.3~dfsg-2 (dsc) [3230 B]
Get:2 http://172.17.0.1/private stretch-staging/main consul 0.6.3~dfsg-2 (tar) [576 kB]
Get:3 http://172.17.0.1/private stretch-staging/main consul 0.6.3~dfsg-2 (diff) [8840 B]
Fetched 588 kB in 0s (1552 kB/s)
Download complete and in download only mode

Check architectures
-------------------


Check dependencies
------------------

Merged Build-Depends: build-essential, fakeroot
Filtered Build-Depends: build-essential, fakeroot
dpkg-deb: building package 'sbuild-build-depends-core-dummy' in '/<<BUILDDIR>>/resolver-wbqBRw/apt_archive/sbuild-build-depends-core-dummy.deb'.
OK
Get:1 file:/<<BUILDDIR>>/resolver-wbqBRw/apt_archive ./ InRelease
Ign:1 file:/<<BUILDDIR>>/resolver-wbqBRw/apt_archive ./ InRelease
Get:2 file:/<<BUILDDIR>>/resolver-wbqBRw/apt_archive ./ Release [2119 B]
Get:2 file:/<<BUILDDIR>>/resolver-wbqBRw/apt_archive ./ Release [2119 B]
Get:3 file:/<<BUILDDIR>>/resolver-wbqBRw/apt_archive ./ Release.gpg [299 B]
Get:3 file:/<<BUILDDIR>>/resolver-wbqBRw/apt_archive ./ Release.gpg [299 B]
Get:4 file:/<<BUILDDIR>>/resolver-wbqBRw/apt_archive ./ Sources [214 B]
Get:5 file:/<<BUILDDIR>>/resolver-wbqBRw/apt_archive ./ Packages [527 B]
Reading package lists...
W: No sandbox user '_apt' on the system, can not drop privileges
Reading package lists...

+------------------------------------------------------------------------------+
| Install core build dependencies (apt-based resolver)                         |
+------------------------------------------------------------------------------+

Installing build dependencies
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  sbuild-build-depends-core-dummy
0 upgraded, 1 newly installed, 0 to remove and 21 not upgraded.
Need to get 0 B/768 B of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 file:/<<BUILDDIR>>/resolver-wbqBRw/apt_archive ./ sbuild-build-depends-core-dummy 0.invalid.0 [768 B]
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package sbuild-build-depends-core-dummy.
(Reading database ... 12648 files and directories currently installed.)
Preparing to unpack .../sbuild-build-depends-core-dummy.deb ...
Unpacking sbuild-build-depends-core-dummy (0.invalid.0) ...
Setting up sbuild-build-depends-core-dummy (0.invalid.0) ...
W: No sandbox user '_apt' on the system, can not drop privileges
Merged Build-Depends: debhelper (>= 9), dh-golang, golang-go, golang-dns-dev | golang-github-miekg-dns-dev, golang-github-armon-circbuf-dev, golang-github-armon-go-metrics-dev, golang-github-armon-go-radix-dev, golang-github-armon-gomdb-dev, golang-github-elazarl-go-bindata-assetfs-dev (>= 0.0~git20151224~), golang-github-fsouza-go-dockerclient-dev, golang-github-hashicorp-go-checkpoint-dev, golang-github-hashicorp-go-memdb-dev, golang-github-hashicorp-go-msgpack-dev, golang-github-hashicorp-go-reap-dev, golang-github-hashicorp-go-syslog-dev, golang-github-hashicorp-golang-lru-dev (>= 0.0~git20160207~), golang-github-hashicorp-hcl-dev, golang-github-hashicorp-logutils-dev, golang-github-hashicorp-memberlist-dev (>= 0.0~git20160225~), golang-github-hashicorp-raft-boltdb-dev, golang-github-hashicorp-raft-dev, golang-github-hashicorp-scada-client-dev, golang-github-hashicorp-serf-dev (>= 0.7.0~), golang-github-hashicorp-yamux-dev (>= 0.0~git20151129~), golang-github-inconshreveable-muxado-dev, golang-github-mitchellh-cli-dev, golang-github-mitchellh-mapstructure-dev, golang-github-ryanuber-columnize-dev
Filtered Build-Depends: debhelper (>= 9), dh-golang, golang-go, golang-dns-dev, golang-github-armon-circbuf-dev, golang-github-armon-go-metrics-dev, golang-github-armon-go-radix-dev, golang-github-armon-gomdb-dev, golang-github-elazarl-go-bindata-assetfs-dev (>= 0.0~git20151224~), golang-github-fsouza-go-dockerclient-dev, golang-github-hashicorp-go-checkpoint-dev, golang-github-hashicorp-go-memdb-dev, golang-github-hashicorp-go-msgpack-dev, golang-github-hashicorp-go-reap-dev, golang-github-hashicorp-go-syslog-dev, golang-github-hashicorp-golang-lru-dev (>= 0.0~git20160207~), golang-github-hashicorp-hcl-dev, golang-github-hashicorp-logutils-dev, golang-github-hashicorp-memberlist-dev (>= 0.0~git20160225~), golang-github-hashicorp-raft-boltdb-dev, golang-github-hashicorp-raft-dev, golang-github-hashicorp-scada-client-dev, golang-github-hashicorp-serf-dev (>= 0.7.0~), golang-github-hashicorp-yamux-dev (>= 0.0~git20151129~), golang-github-inconshreveable-muxado-dev, golang-github-mitchellh-cli-dev, golang-github-mitchellh-mapstructure-dev, golang-github-ryanuber-columnize-dev
dpkg-deb: building package 'sbuild-build-depends-consul-dummy' in '/<<BUILDDIR>>/resolver-t23Lpf/apt_archive/sbuild-build-depends-consul-dummy.deb'.
OK
Get:1 file:/<<BUILDDIR>>/resolver-t23Lpf/apt_archive ./ InRelease
Ign:1 file:/<<BUILDDIR>>/resolver-t23Lpf/apt_archive ./ InRelease
Get:2 file:/<<BUILDDIR>>/resolver-t23Lpf/apt_archive ./ Release [2119 B]
Get:2 file:/<<BUILDDIR>>/resolver-t23Lpf/apt_archive ./ Release [2119 B]
Get:3 file:/<<BUILDDIR>>/resolver-t23Lpf/apt_archive ./ Release.gpg [299 B]
Get:3 file:/<<BUILDDIR>>/resolver-t23Lpf/apt_archive ./ Release.gpg [299 B]
Get:4 file:/<<BUILDDIR>>/resolver-t23Lpf/apt_archive ./ Sources [497 B]
Get:5 file:/<<BUILDDIR>>/resolver-t23Lpf/apt_archive ./ Packages [813 B]
Reading package lists...
W: No sandbox user '_apt' on the system, can not drop privileges
Reading package lists...

+------------------------------------------------------------------------------+
| Install consul build dependencies (apt-based resolver)                       |
+------------------------------------------------------------------------------+

Installing build dependencies
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  autotools-dev bsdmainutils ca-certificates debhelper dh-golang
  dh-strip-nondeterminism file gettext gettext-base golang-check.v1-dev
  golang-codegangsta-cli-dev golang-context-dev golang-dbus-dev golang-dns-dev
  golang-docker-dev golang-github-agtorre-gocolorize-dev
  golang-github-armon-circbuf-dev golang-github-armon-go-metrics-dev
  golang-github-armon-go-radix-dev golang-github-armon-gomdb-dev
  golang-github-bgentry-speakeasy-dev golang-github-bitly-go-simplejson-dev
  golang-github-bmizerany-assert-dev golang-github-boltdb-bolt-dev
  golang-github-bradfitz-gomemcache-dev golang-github-bugsnag-bugsnag-go-dev
  golang-github-bugsnag-panicwrap-dev golang-github-codegangsta-cli-dev
  golang-github-coreos-go-systemd-dev golang-github-datadog-datadog-go-dev
  golang-github-docker-docker-dev golang-github-docker-go-units-dev
  golang-github-elazarl-go-bindata-assetfs-dev
  golang-github-fsouza-go-dockerclient-dev golang-github-garyburd-redigo-dev
  golang-github-getsentry-raven-go-dev golang-github-go-fsnotify-fsnotify-dev
  golang-github-gorilla-mux-dev golang-github-hashicorp-errwrap-dev
  golang-github-hashicorp-go-checkpoint-dev
  golang-github-hashicorp-go-cleanhttp-dev
  golang-github-hashicorp-go-immutable-radix-dev
  golang-github-hashicorp-go-memdb-dev golang-github-hashicorp-go-msgpack-dev
  golang-github-hashicorp-go-multierror-dev
  golang-github-hashicorp-go-reap-dev golang-github-hashicorp-go-syslog-dev
  golang-github-hashicorp-golang-lru-dev golang-github-hashicorp-hcl-dev
  golang-github-hashicorp-logutils-dev golang-github-hashicorp-mdns-dev
  golang-github-hashicorp-memberlist-dev
  golang-github-hashicorp-net-rpc-msgpackrpc-dev
  golang-github-hashicorp-raft-boltdb-dev golang-github-hashicorp-raft-dev
  golang-github-hashicorp-scada-client-dev golang-github-hashicorp-serf-dev
  golang-github-hashicorp-uuid-dev golang-github-hashicorp-yamux-dev
  golang-github-inconshreveable-muxado-dev golang-github-juju-loggo-dev
  golang-github-julienschmidt-httprouter-dev golang-github-kardianos-osext-dev
  golang-github-mattn-go-isatty-dev golang-github-mitchellh-cli-dev
  golang-github-mitchellh-mapstructure-dev
  golang-github-opencontainers-runc-dev golang-github-opencontainers-specs-dev
  golang-github-prometheus-common-dev golang-github-revel-revel-dev
  golang-github-robfig-config-dev golang-github-robfig-pathtree-dev
  golang-github-ryanuber-columnize-dev golang-github-sirupsen-logrus-dev
  golang-github-stretchr-testify-dev golang-github-stvp-go-udp-testing-dev
  golang-github-tobi-airbrake-go-dev golang-github-ugorji-go-codec-dev
  golang-github-ugorji-go-msgpack-dev golang-github-vishvananda-netlink-dev
  golang-github-vishvananda-netns-dev golang-go golang-gocapability-dev
  golang-golang-x-crypto-dev golang-golang-x-net-dev golang-golang-x-sys-dev
  golang-gopkg-mgo.v2-dev golang-gopkg-tomb.v2-dev
  golang-gopkg-vmihailenco-msgpack.v2-dev golang-goprotobuf-dev
  golang-logrus-dev golang-objx-dev golang-pretty-dev golang-procfs-dev
  golang-prometheus-client-dev golang-protobuf-extensions-dev golang-src
  golang-text-dev golang-x-text-dev groff-base intltool-debian
  libarchive-zip-perl libbsd0 libcroco3 libffi6
  libfile-stripnondeterminism-perl libglib2.0-0 libicu55 liblmdb-dev liblmdb0
  libmagic1 libpipeline1 libprotobuf9v5 libprotoc9v5 libsasl2-2 libsasl2-dev
  libsasl2-modules-db libssl1.0.2 libsystemd-dev libtimedate-perl
  libunistring0 libxml2 man-db openssl pkg-config po-debconf protobuf-compiler
Suggested packages:
  wamerican | wordlist whois vacation dh-make gettext-doc autopoint
  libasprintf-dev libgettextpo-dev bzr git golang-golang-x-tools mercurial
  subversion groff less www-browser libmail-box-perl
Recommended packages:
  curl | wget | lynx-cur libglib2.0-data shared-mime-info xdg-user-dirs
  lmdb-doc libsasl2-modules xml-core libmail-sendmail-perl
The following NEW packages will be installed:
  autotools-dev bsdmainutils ca-certificates debhelper dh-golang
  dh-strip-nondeterminism file gettext gettext-base golang-check.v1-dev
  golang-codegangsta-cli-dev golang-context-dev golang-dbus-dev golang-dns-dev
  golang-docker-dev golang-github-agtorre-gocolorize-dev
  golang-github-armon-circbuf-dev golang-github-armon-go-metrics-dev
  golang-github-armon-go-radix-dev golang-github-armon-gomdb-dev
  golang-github-bgentry-speakeasy-dev golang-github-bitly-go-simplejson-dev
  golang-github-bmizerany-assert-dev golang-github-boltdb-bolt-dev
  golang-github-bradfitz-gomemcache-dev golang-github-bugsnag-bugsnag-go-dev
  golang-github-bugsnag-panicwrap-dev golang-github-codegangsta-cli-dev
  golang-github-coreos-go-systemd-dev golang-github-datadog-datadog-go-dev
  golang-github-docker-docker-dev golang-github-docker-go-units-dev
  golang-github-elazarl-go-bindata-assetfs-dev
  golang-github-fsouza-go-dockerclient-dev golang-github-garyburd-redigo-dev
  golang-github-getsentry-raven-go-dev golang-github-go-fsnotify-fsnotify-dev
  golang-github-gorilla-mux-dev golang-github-hashicorp-errwrap-dev
  golang-github-hashicorp-go-checkpoint-dev
  golang-github-hashicorp-go-cleanhttp-dev
  golang-github-hashicorp-go-immutable-radix-dev
  golang-github-hashicorp-go-memdb-dev golang-github-hashicorp-go-msgpack-dev
  golang-github-hashicorp-go-multierror-dev
  golang-github-hashicorp-go-reap-dev golang-github-hashicorp-go-syslog-dev
  golang-github-hashicorp-golang-lru-dev golang-github-hashicorp-hcl-dev
  golang-github-hashicorp-logutils-dev golang-github-hashicorp-mdns-dev
  golang-github-hashicorp-memberlist-dev
  golang-github-hashicorp-net-rpc-msgpackrpc-dev
  golang-github-hashicorp-raft-boltdb-dev golang-github-hashicorp-raft-dev
  golang-github-hashicorp-scada-client-dev golang-github-hashicorp-serf-dev
  golang-github-hashicorp-uuid-dev golang-github-hashicorp-yamux-dev
  golang-github-inconshreveable-muxado-dev golang-github-juju-loggo-dev
  golang-github-julienschmidt-httprouter-dev golang-github-kardianos-osext-dev
  golang-github-mattn-go-isatty-dev golang-github-mitchellh-cli-dev
  golang-github-mitchellh-mapstructure-dev
  golang-github-opencontainers-runc-dev golang-github-opencontainers-specs-dev
  golang-github-prometheus-common-dev golang-github-revel-revel-dev
  golang-github-robfig-config-dev golang-github-robfig-pathtree-dev
  golang-github-ryanuber-columnize-dev golang-github-sirupsen-logrus-dev
  golang-github-stretchr-testify-dev golang-github-stvp-go-udp-testing-dev
  golang-github-tobi-airbrake-go-dev golang-github-ugorji-go-codec-dev
  golang-github-ugorji-go-msgpack-dev golang-github-vishvananda-netlink-dev
  golang-github-vishvananda-netns-dev golang-go golang-gocapability-dev
  golang-golang-x-crypto-dev golang-golang-x-net-dev golang-golang-x-sys-dev
  golang-gopkg-mgo.v2-dev golang-gopkg-tomb.v2-dev
  golang-gopkg-vmihailenco-msgpack.v2-dev golang-goprotobuf-dev
  golang-logrus-dev golang-objx-dev golang-pretty-dev golang-procfs-dev
  golang-prometheus-client-dev golang-protobuf-extensions-dev golang-src
  golang-text-dev golang-x-text-dev groff-base intltool-debian
  libarchive-zip-perl libbsd0 libcroco3 libffi6
  libfile-stripnondeterminism-perl libglib2.0-0 libicu55 liblmdb-dev liblmdb0
  libmagic1 libpipeline1 libprotobuf9v5 libprotoc9v5 libsasl2-2 libsasl2-dev
  libsasl2-modules-db libssl1.0.2 libsystemd-dev libtimedate-perl
  libunistring0 libxml2 man-db openssl pkg-config po-debconf protobuf-compiler
  sbuild-build-depends-consul-dummy
0 upgraded, 128 newly installed, 0 to remove and 21 not upgraded.
Need to get 55.3 MB/55.3 MB of archives.
After this operation, 304 MB of additional disk space will be used.
Get:1 file:/<<BUILDDIR>>/resolver-t23Lpf/apt_archive ./ sbuild-build-depends-consul-dummy 0.invalid.0 [1046 B]
Get:2 http://172.17.0.1/private stretch-staging/main armhf groff-base armhf 1.22.3-7 [1083 kB]
Get:3 http://172.17.0.1/private stretch-staging/main armhf libbsd0 armhf 0.8.2-1 [88.0 kB]
Get:4 http://172.17.0.1/private stretch-staging/main armhf bsdmainutils armhf 9.0.9 [177 kB]
Get:5 http://172.17.0.1/private stretch-staging/main armhf libpipeline1 armhf 1.4.1-2 [23.7 kB]
Get:6 http://172.17.0.1/private stretch-staging/main armhf man-db armhf 2.7.5-1 [975 kB]
Get:7 http://172.17.0.1/private stretch-staging/main armhf liblmdb0 armhf 0.9.17-3 [36.7 kB]
Get:8 http://172.17.0.1/private stretch-staging/main armhf liblmdb-dev armhf 0.9.17-3 [51.9 kB]
Get:9 http://172.17.0.1/private stretch-staging/main armhf libunistring0 armhf 0.9.3-5.2 [253 kB]
Get:10 http://172.17.0.1/private stretch-staging/main armhf libssl1.0.2 armhf 1.0.2g-1 [886 kB]
Get:11 http://172.17.0.1/private stretch-staging/main armhf libmagic1 armhf 1:5.25-2 [250 kB]
Get:12 http://172.17.0.1/private stretch-staging/main armhf file armhf 1:5.25-2 [61.2 kB]
Get:13 http://172.17.0.1/private stretch-staging/main armhf gettext-base armhf 0.19.7-2 [111 kB]
Get:14 http://172.17.0.1/private stretch-staging/main armhf libsasl2-modules-db armhf 2.1.26.dfsg1-14+b1 [65.8 kB]
Get:15 http://172.17.0.1/private stretch-staging/main armhf libsasl2-2 armhf 2.1.26.dfsg1-14+b1 [97.1 kB]
Get:16 http://172.17.0.1/private stretch-staging/main armhf libicu55 armhf 55.1-7 [7380 kB]
Get:17 http://172.17.0.1/private stretch-staging/main armhf libxml2 armhf 2.9.3+dfsg1-1 [800 kB]
Get:18 http://172.17.0.1/private stretch-staging/main armhf autotools-dev all 20150820.1 [71.7 kB]
Get:19 http://172.17.0.1/private stretch-staging/main armhf openssl armhf 1.0.2g-1 [666 kB]
Get:20 http://172.17.0.1/private stretch-staging/main armhf ca-certificates all 20160104 [200 kB]
Get:21 http://172.17.0.1/private stretch-staging/main armhf libffi6 armhf 3.2.1-4 [18.5 kB]
Get:22 http://172.17.0.1/private stretch-staging/main armhf libglib2.0-0 armhf 2.46.2-3 [2482 kB]
Get:23 http://172.17.0.1/private stretch-staging/main armhf libcroco3 armhf 0.6.11-1 [131 kB]
Get:24 http://172.17.0.1/private stretch-staging/main armhf gettext armhf 0.19.7-2 [1400 kB]
Get:25 http://172.17.0.1/private stretch-staging/main armhf intltool-debian all 0.35.0+20060710.4 [26.3 kB]
Get:26 http://172.17.0.1/private stretch-staging/main armhf po-debconf all 1.0.19 [249 kB]
Get:27 http://172.17.0.1/private stretch-staging/main armhf libarchive-zip-perl all 1.56-2 [94.9 kB]
Get:28 http://172.17.0.1/private stretch-staging/main armhf libfile-stripnondeterminism-perl all 0.016-1 [11.9 kB]
Get:29 http://172.17.0.1/private stretch-staging/main armhf libtimedate-perl all 2.3000-2 [42.2 kB]
Get:30 http://172.17.0.1/private stretch-staging/main armhf dh-strip-nondeterminism all 0.016-1 [6998 B]
Get:31 http://172.17.0.1/private stretch-staging/main armhf debhelper all 9.20160313 [808 kB]
Get:32 http://172.17.0.1/private stretch-staging/main armhf golang-github-docker-docker-dev all 1.8.3~ds1-2 [223 kB]
Get:33 http://172.17.0.1/private stretch-staging/main armhf golang-docker-dev all 1.8.3~ds1-2 [32.2 kB]
Get:34 http://172.17.0.1/private stretch-staging/main armhf golang-src armhf 2:1.6-1+rpi1 [6782 kB]
Get:35 http://172.17.0.1/private stretch-staging/main armhf golang-go armhf 2:1.6-1+rpi1 [22.2 MB]
Get:36 http://172.17.0.1/private stretch-staging/main armhf golang-text-dev all 0.0~git20130502-1 [6246 B]
Get:37 http://172.17.0.1/private stretch-staging/main armhf golang-pretty-dev all 0.0~git20130613-1 [7220 B]
Get:38 http://172.17.0.1/private stretch-staging/main armhf golang-github-bmizerany-assert-dev all 0.0~git20120716-1 [3658 B]
Get:39 http://172.17.0.1/private stretch-staging/main armhf golang-github-bitly-go-simplejson-dev all 0.5.0-1 [6916 B]
Get:40 http://172.17.0.1/private stretch-staging/main armhf golang-github-mattn-go-isatty-dev all 0.0.1-1 [3456 B]
Get:41 http://172.17.0.1/private stretch-staging/main armhf libprotobuf9v5 armhf 2.6.1-1.3 [292 kB]
Get:42 http://172.17.0.1/private stretch-staging/main armhf libprotoc9v5 armhf 2.6.1-1.3 [241 kB]
Get:43 http://172.17.0.1/private stretch-staging/main armhf libsasl2-dev armhf 2.1.26.dfsg1-14+b1 [293 kB]
Get:44 http://172.17.0.1/private stretch-staging/main armhf libsystemd-dev armhf 229-2 [211 kB]
Get:45 http://172.17.0.1/private stretch-staging/main armhf pkg-config armhf 0.29-3 [59.0 kB]
Get:46 http://172.17.0.1/private stretch-staging/main armhf protobuf-compiler armhf 2.6.1-1.3 [35.8 kB]
Get:47 http://172.17.0.1/private stretch-staging/main armhf dh-golang all 1.12 [9402 B]
Get:48 http://172.17.0.1/private stretch-staging/main armhf golang-check.v1-dev all 0.0+git20150729.11d3bc7-3 [29.1 kB]
Get:49 http://172.17.0.1/private stretch-staging/main armhf golang-github-codegangsta-cli-dev all 0.0~git20150117-3 [14.4 kB]
Get:50 http://172.17.0.1/private stretch-staging/main armhf golang-codegangsta-cli-dev all 0.0~git20150117-3 [2332 B]
Get:51 http://172.17.0.1/private stretch-staging/main armhf golang-context-dev all 0.0~git20140604.1.14f550f-1 [6280 B]
Get:52 http://172.17.0.1/private stretch-staging/main armhf golang-dbus-dev all 3-1 [39.4 kB]
Get:53 http://172.17.0.1/private stretch-staging/main armhf golang-dns-dev all 0.0~git20151030.0.6a15566-1 [128 kB]
Get:54 http://172.17.0.1/private stretch-staging/main armhf golang-github-agtorre-gocolorize-dev all 1.0.0-1 [7020 B]
Get:55 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-circbuf-dev all 0.0~git20150827.0.bbbad09-1 [3650 B]
Get:56 http://172.17.0.1/private stretch-staging/main armhf golang-github-julienschmidt-httprouter-dev all 1.1-1 [15.5 kB]
Get:57 http://172.17.0.1/private stretch-staging/main armhf golang-x-text-dev all 0+git20151217.cf49866-1 [2096 kB]
Get:58 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-crypto-dev all 1:0.0~git20151201.0.7b85b09-2 [802 kB]
Get:59 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-net-dev all 1:0.0+git20160110.4fd4a9f-1 [514 kB]
Get:60 http://172.17.0.1/private stretch-staging/main armhf golang-goprotobuf-dev armhf 0.0~git20150526-2 [700 kB]
Get:61 http://172.17.0.1/private stretch-staging/main armhf golang-protobuf-extensions-dev all 0+git20150513.fc2b8d3-4 [8694 B]
Get:62 http://172.17.0.1/private stretch-staging/main armhf golang-github-kardianos-osext-dev all 0.0~git20151124.0.10da294-2 [6380 B]
Get:63 http://172.17.0.1/private stretch-staging/main armhf golang-github-bugsnag-panicwrap-dev all 0.0~git20141111-1 [9286 B]
Get:64 http://172.17.0.1/private stretch-staging/main armhf golang-github-juju-loggo-dev all 0.0~git20150527.0.8477fc9-1 [16.4 kB]
Get:65 http://172.17.0.1/private stretch-staging/main armhf golang-github-bradfitz-gomemcache-dev all 0.0~git20141109-1 [9988 B]
Get:66 http://172.17.0.1/private stretch-staging/main armhf golang-github-garyburd-redigo-dev all 0.0~git20150901.0.d8dbe4d-1 [27.8 kB]
Get:67 http://172.17.0.1/private stretch-staging/main armhf golang-github-go-fsnotify-fsnotify-dev all 1.2.9-1 [24.1 kB]
Get:68 http://172.17.0.1/private stretch-staging/main armhf golang-github-robfig-config-dev all 0.0~git20141208-1 [15.2 kB]
Get:69 http://172.17.0.1/private stretch-staging/main armhf golang-github-robfig-pathtree-dev all 0.0~git20140121-1 [5688 B]
Get:70 http://172.17.0.1/private stretch-staging/main armhf golang-github-revel-revel-dev all 0.12.0+dfsg-1 [68.8 kB]
Get:71 http://172.17.0.1/private stretch-staging/main armhf golang-github-bugsnag-bugsnag-go-dev all 1.0.5+dfsg-1 [28.1 kB]
Get:72 http://172.17.0.1/private stretch-staging/main armhf golang-github-getsentry-raven-go-dev all 0.0~git20150721.0.74c334d-1 [14.4 kB]
Get:73 http://172.17.0.1/private stretch-staging/main armhf golang-objx-dev all 0.0~git20140527-4 [20.1 kB]
Get:74 http://172.17.0.1/private stretch-staging/main armhf golang-github-stretchr-testify-dev all 1.0-2 [27.8 kB]
Get:75 http://172.17.0.1/private stretch-staging/main armhf golang-github-stvp-go-udp-testing-dev all 0.0~git20150316.0.abcd331-1 [3654 B]
Get:76 http://172.17.0.1/private stretch-staging/main armhf golang-github-tobi-airbrake-go-dev all 0.0~git20150109-1 [5960 B]
Get:77 http://172.17.0.1/private stretch-staging/main armhf golang-github-sirupsen-logrus-dev all 0.8.7-3 [26.3 kB]
Get:78 http://172.17.0.1/private stretch-staging/main armhf golang-logrus-dev all 0.8.7-3 [3014 B]
Get:79 http://172.17.0.1/private stretch-staging/main armhf golang-github-prometheus-common-dev all 0+git20160104.0a3005b-1 [49.7 kB]
Get:80 http://172.17.0.1/private stretch-staging/main armhf golang-procfs-dev all 0+git20150616.c91d8ee-1 [13.5 kB]
Get:81 http://172.17.0.1/private stretch-staging/main armhf golang-prometheus-client-dev all 0.7.0+ds-3 [88.4 kB]
Get:82 http://172.17.0.1/private stretch-staging/main armhf golang-github-datadog-datadog-go-dev all 0.0~git20150930.0.b050cd8-1 [7034 B]
Get:83 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-go-metrics-dev all 0.0~git20151207.0.06b6099-1 [13.0 kB]
Get:84 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-go-radix-dev all 0.0~git20150602.0.fbd82e8-1 [6472 B]
Get:85 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-gomdb-dev all 0.0~git20150106.0.151f2e0-1 [7438 B]
Get:86 http://172.17.0.1/private stretch-staging/main armhf golang-github-bgentry-speakeasy-dev all 0.0~git20150902.0.36e9cfd-1 [4632 B]
Get:87 http://172.17.0.1/private stretch-staging/main armhf golang-github-boltdb-bolt-dev all 1.1.0-1 [55.3 kB]
Get:88 http://172.17.0.1/private stretch-staging/main armhf golang-github-coreos-go-systemd-dev all 5-1 [31.8 kB]
Get:89 http://172.17.0.1/private stretch-staging/main armhf golang-github-docker-go-units-dev all 0.3.0-1 [11.8 kB]
Get:90 http://172.17.0.1/private stretch-staging/main armhf golang-github-elazarl-go-bindata-assetfs-dev all 0.0~git20151224.0.57eb5e1-1 [5088 B]
Get:91 http://172.17.0.1/private stretch-staging/main armhf golang-github-opencontainers-specs-dev all 0.0~git20150829.0.e9cb564-1 [5492 B]
Get:92 http://172.17.0.1/private stretch-staging/main armhf golang-github-vishvananda-netns-dev all 0.0~git20150710.0.604eaf1-1 [5448 B]
Get:93 http://172.17.0.1/private stretch-staging/main armhf golang-github-vishvananda-netlink-dev all 0.0~git20160306.0.4fdf23c-1 [50.5 kB]
Get:94 http://172.17.0.1/private stretch-staging/main armhf golang-gocapability-dev all 0.0~git20150506.1.66ef2aa-1 [10.8 kB]
Get:95 http://172.17.0.1/private stretch-staging/main armhf golang-github-opencontainers-runc-dev all 0.0.8+dfsg-1 [123 kB]
Get:96 http://172.17.0.1/private stretch-staging/main armhf golang-github-gorilla-mux-dev all 0.0~git20150814.0.f7b6aaa-1 [25.0 kB]
Get:97 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-sys-dev all 0.0~git20150612-1 [171 kB]
Get:98 http://172.17.0.1/private stretch-staging/main armhf golang-github-fsouza-go-dockerclient-dev all 0.0+git20160316-1 [170 kB]
Get:99 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-errwrap-dev all 0.0~git20141028.0.7554cd9-1 [9692 B]
Get:100 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-cleanhttp-dev all 0.0~git20160217.0.875fb67-1 [8256 B]
Get:101 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-checkpoint-dev all 0.0~git20151022.0.e4b2dc3-1 [11.2 kB]
Get:102 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-golang-lru-dev all 0.0~git20160207.0.a0d98a5-1 [12.9 kB]
Get:103 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-uuid-dev all 0.0~git20160218.0.6994546-1 [7306 B]
Get:104 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-immutable-radix-dev all 0.0~git20160222.0.8e8ed81-1 [13.4 kB]
Get:105 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-memdb-dev all 0.0~git20160301.0.98f52f5-1 [19.1 kB]
Get:106 http://172.17.0.1/private stretch-staging/main armhf golang-github-ugorji-go-msgpack-dev all 0.0~git20130605.792643-1 [20.3 kB]
Get:107 http://172.17.0.1/private stretch-staging/main armhf golang-github-ugorji-go-codec-dev all 0.0~git20151130.0.357a44b-1 [127 kB]
Get:108 http://172.17.0.1/private stretch-staging/main armhf golang-gopkg-vmihailenco-msgpack.v2-dev all 2.4.11-1 [17.9 kB]
Get:109 http://172.17.0.1/private stretch-staging/main armhf golang-gopkg-tomb.v2-dev all 0.0~git20140626.14b3d72-1 [5140 B]
Get:110 http://172.17.0.1/private stretch-staging/main armhf golang-gopkg-mgo.v2-dev all 2015.12.06-1 [138 kB]
Get:111 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-msgpack-dev all 0.0~git20150518-1 [42.1 kB]
Get:112 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-multierror-dev all 0.0~git20150916.0.d30f099-1 [9274 B]
Get:113 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-reap-dev all 0.0~git20160113.0.2d85522-1 [9084 B]
Get:114 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-syslog-dev all 0.0~git20150218.0.42a2b57-1 [5336 B]
Get:115 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-hcl-dev all 0.0~git20151110.0.fa160f1-1 [42.7 kB]
Get:116 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-logutils-dev all 0.0~git20150609.0.0dc08b1-1 [8150 B]
Get:117 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-mdns-dev all 0.0~git20150317.0.2b439d3-1 [10.9 kB]
Get:118 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-memberlist-dev all 0.0~git20160225.0.ae9a8d9-1 [48.7 kB]
Get:119 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-net-rpc-msgpackrpc-dev all 0.0~git20151116.0.a14192a-1 [4168 B]
Get:120 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-raft-dev all 0.0~git20160317.0.3359516-1 [52.2 kB]
Get:121 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-raft-boltdb-dev all 0.0~git20150201.d1e82c1-1 [9744 B]
Get:122 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-yamux-dev all 0.0~git20151129.0.df94978-1 [20.0 kB]
Get:123 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-scada-client-dev all 0.0~git20150828.0.84989fd-1 [17.4 kB]
Get:124 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-cli-dev all 0.0~git20160203.0.5c87c51-1 [16.9 kB]
Get:125 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-mapstructure-dev all 0.0~git20150717.0.281073e-2 [14.4 kB]
Get:126 http://172.17.0.1/private stretch-staging/main armhf golang-github-ryanuber-columnize-dev all 2.1.0-1 [5140 B]
Get:127 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-serf-dev all 0.7.0~ds1-1 [110 kB]
Get:128 http://172.17.0.1/private stretch-staging/main armhf golang-github-inconshreveable-muxado-dev all 0.0~git20140312.0.f693c7e-1 [26.4 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 55.3 MB in 17s (3234 kB/s)
Selecting previously unselected package groff-base.
(Reading database ... 12648 files and directories currently installed.)
Preparing to unpack .../groff-base_1.22.3-7_armhf.deb ...
Unpacking groff-base (1.22.3-7) ...
Selecting previously unselected package libbsd0:armhf.
Preparing to unpack .../libbsd0_0.8.2-1_armhf.deb ...
Unpacking libbsd0:armhf (0.8.2-1) ...
Selecting previously unselected package bsdmainutils.
Preparing to unpack .../bsdmainutils_9.0.9_armhf.deb ...
Unpacking bsdmainutils (9.0.9) ...
Selecting previously unselected package libpipeline1:armhf.
Preparing to unpack .../libpipeline1_1.4.1-2_armhf.deb ...
Unpacking libpipeline1:armhf (1.4.1-2) ...
Selecting previously unselected package man-db.
Preparing to unpack .../man-db_2.7.5-1_armhf.deb ...
Unpacking man-db (2.7.5-1) ...
Selecting previously unselected package liblmdb0:armhf.
Preparing to unpack .../liblmdb0_0.9.17-3_armhf.deb ...
Unpacking liblmdb0:armhf (0.9.17-3) ...
Selecting previously unselected package liblmdb-dev:armhf.
Preparing to unpack .../liblmdb-dev_0.9.17-3_armhf.deb ...
Unpacking liblmdb-dev:armhf (0.9.17-3) ...
Selecting previously unselected package libunistring0:armhf.
Preparing to unpack .../libunistring0_0.9.3-5.2_armhf.deb ...
Unpacking libunistring0:armhf (0.9.3-5.2) ...
Selecting previously unselected package libssl1.0.2:armhf.
Preparing to unpack .../libssl1.0.2_1.0.2g-1_armhf.deb ...
Unpacking libssl1.0.2:armhf (1.0.2g-1) ...
Selecting previously unselected package libmagic1:armhf.
Preparing to unpack .../libmagic1_1%3a5.25-2_armhf.deb ...
Unpacking libmagic1:armhf (1:5.25-2) ...
Selecting previously unselected package file.
Preparing to unpack .../file_1%3a5.25-2_armhf.deb ...
Unpacking file (1:5.25-2) ...
Selecting previously unselected package gettext-base.
Preparing to unpack .../gettext-base_0.19.7-2_armhf.deb ...
Unpacking gettext-base (0.19.7-2) ...
Selecting previously unselected package libsasl2-modules-db:armhf.
Preparing to unpack .../libsasl2-modules-db_2.1.26.dfsg1-14+b1_armhf.deb ...
Unpacking libsasl2-modules-db:armhf (2.1.26.dfsg1-14+b1) ...
Selecting previously unselected package libsasl2-2:armhf.
Preparing to unpack .../libsasl2-2_2.1.26.dfsg1-14+b1_armhf.deb ...
Unpacking libsasl2-2:armhf (2.1.26.dfsg1-14+b1) ...
Selecting previously unselected package libicu55:armhf.
Preparing to unpack .../libicu55_55.1-7_armhf.deb ...
Unpacking libicu55:armhf (55.1-7) ...
Selecting previously unselected package libxml2:armhf.
Preparing to unpack .../libxml2_2.9.3+dfsg1-1_armhf.deb ...
Unpacking libxml2:armhf (2.9.3+dfsg1-1) ...
Selecting previously unselected package autotools-dev.
Preparing to unpack .../autotools-dev_20150820.1_all.deb ...
Unpacking autotools-dev (20150820.1) ...
Selecting previously unselected package openssl.
Preparing to unpack .../openssl_1.0.2g-1_armhf.deb ...
Unpacking openssl (1.0.2g-1) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../ca-certificates_20160104_all.deb ...
Unpacking ca-certificates (20160104) ...
Selecting previously unselected package libffi6:armhf.
Preparing to unpack .../libffi6_3.2.1-4_armhf.deb ...
Unpacking libffi6:armhf (3.2.1-4) ...
Selecting previously unselected package libglib2.0-0:armhf.
Preparing to unpack .../libglib2.0-0_2.46.2-3_armhf.deb ...
Unpacking libglib2.0-0:armhf (2.46.2-3) ...
Selecting previously unselected package libcroco3:armhf.
Preparing to unpack .../libcroco3_0.6.11-1_armhf.deb ...
Unpacking libcroco3:armhf (0.6.11-1) ...
Selecting previously unselected package gettext.
Preparing to unpack .../gettext_0.19.7-2_armhf.deb ...
Unpacking gettext (0.19.7-2) ...
Selecting previously unselected package intltool-debian.
Preparing to unpack .../intltool-debian_0.35.0+20060710.4_all.deb ...
Unpacking intltool-debian (0.35.0+20060710.4) ...
Selecting previously unselected package po-debconf.
Preparing to unpack .../po-debconf_1.0.19_all.deb ...
Unpacking po-debconf (1.0.19) ...
Selecting previously unselected package libarchive-zip-perl.
Preparing to unpack .../libarchive-zip-perl_1.56-2_all.deb ...
Unpacking libarchive-zip-perl (1.56-2) ...
Selecting previously unselected package libfile-stripnondeterminism-perl.
Preparing to unpack .../libfile-stripnondeterminism-perl_0.016-1_all.deb ...
Unpacking libfile-stripnondeterminism-perl (0.016-1) ...
Selecting previously unselected package libtimedate-perl.
Preparing to unpack .../libtimedate-perl_2.3000-2_all.deb ...
Unpacking libtimedate-perl (2.3000-2) ...
Selecting previously unselected package dh-strip-nondeterminism.
Preparing to unpack .../dh-strip-nondeterminism_0.016-1_all.deb ...
Unpacking dh-strip-nondeterminism (0.016-1) ...
Selecting previously unselected package debhelper.
Preparing to unpack .../debhelper_9.20160313_all.deb ...
Unpacking debhelper (9.20160313) ...
Selecting previously unselected package golang-github-docker-docker-dev.
Preparing to unpack .../golang-github-docker-docker-dev_1.8.3~ds1-2_all.deb ...
Unpacking golang-github-docker-docker-dev (1.8.3~ds1-2) ...
Selecting previously unselected package golang-docker-dev.
Preparing to unpack .../golang-docker-dev_1.8.3~ds1-2_all.deb ...
Unpacking golang-docker-dev (1.8.3~ds1-2) ...
Selecting previously unselected package golang-src.
Preparing to unpack .../golang-src_2%3a1.6-1+rpi1_armhf.deb ...
Unpacking golang-src (2:1.6-1+rpi1) ...
Selecting previously unselected package golang-go.
Preparing to unpack .../golang-go_2%3a1.6-1+rpi1_armhf.deb ...
Unpacking golang-go (2:1.6-1+rpi1) ...
Selecting previously unselected package golang-text-dev.
Preparing to unpack .../golang-text-dev_0.0~git20130502-1_all.deb ...
Unpacking golang-text-dev (0.0~git20130502-1) ...
Selecting previously unselected package golang-pretty-dev.
Preparing to unpack .../golang-pretty-dev_0.0~git20130613-1_all.deb ...
Unpacking golang-pretty-dev (0.0~git20130613-1) ...
Selecting previously unselected package golang-github-bmizerany-assert-dev.
Preparing to unpack .../golang-github-bmizerany-assert-dev_0.0~git20120716-1_all.deb ...
Unpacking golang-github-bmizerany-assert-dev (0.0~git20120716-1) ...
Selecting previously unselected package golang-github-bitly-go-simplejson-dev.
Preparing to unpack .../golang-github-bitly-go-simplejson-dev_0.5.0-1_all.deb ...
Unpacking golang-github-bitly-go-simplejson-dev (0.5.0-1) ...
Selecting previously unselected package golang-github-mattn-go-isatty-dev.
Preparing to unpack .../golang-github-mattn-go-isatty-dev_0.0.1-1_all.deb ...
Unpacking golang-github-mattn-go-isatty-dev (0.0.1-1) ...
Selecting previously unselected package libprotobuf9v5:armhf.
Preparing to unpack .../libprotobuf9v5_2.6.1-1.3_armhf.deb ...
Unpacking libprotobuf9v5:armhf (2.6.1-1.3) ...
Selecting previously unselected package libprotoc9v5:armhf.
Preparing to unpack .../libprotoc9v5_2.6.1-1.3_armhf.deb ...
Unpacking libprotoc9v5:armhf (2.6.1-1.3) ...
Selecting previously unselected package libsasl2-dev.
Preparing to unpack .../libsasl2-dev_2.1.26.dfsg1-14+b1_armhf.deb ...
Unpacking libsasl2-dev (2.1.26.dfsg1-14+b1) ...
Selecting previously unselected package libsystemd-dev:armhf.
Preparing to unpack .../libsystemd-dev_229-2_armhf.deb ...
Unpacking libsystemd-dev:armhf (229-2) ...
Selecting previously unselected package pkg-config.
Preparing to unpack .../pkg-config_0.29-3_armhf.deb ...
Unpacking pkg-config (0.29-3) ...
Selecting previously unselected package protobuf-compiler.
Preparing to unpack .../protobuf-compiler_2.6.1-1.3_armhf.deb ...
Unpacking protobuf-compiler (2.6.1-1.3) ...
Selecting previously unselected package dh-golang.
Preparing to unpack .../dh-golang_1.12_all.deb ...
Unpacking dh-golang (1.12) ...
Selecting previously unselected package golang-check.v1-dev.
Preparing to unpack .../golang-check.v1-dev_0.0+git20150729.11d3bc7-3_all.deb ...
Unpacking golang-check.v1-dev (0.0+git20150729.11d3bc7-3) ...
Selecting previously unselected package golang-github-codegangsta-cli-dev.
Preparing to unpack .../golang-github-codegangsta-cli-dev_0.0~git20150117-3_all.deb ...
Unpacking golang-github-codegangsta-cli-dev (0.0~git20150117-3) ...
Selecting previously unselected package golang-codegangsta-cli-dev.
Preparing to unpack .../golang-codegangsta-cli-dev_0.0~git20150117-3_all.deb ...
Unpacking golang-codegangsta-cli-dev (0.0~git20150117-3) ...
Selecting previously unselected package golang-context-dev.
Preparing to unpack .../golang-context-dev_0.0~git20140604.1.14f550f-1_all.deb ...
Unpacking golang-context-dev (0.0~git20140604.1.14f550f-1) ...
Selecting previously unselected package golang-dbus-dev.
Preparing to unpack .../golang-dbus-dev_3-1_all.deb ...
Unpacking golang-dbus-dev (3-1) ...
Selecting previously unselected package golang-dns-dev.
Preparing to unpack .../golang-dns-dev_0.0~git20151030.0.6a15566-1_all.deb ...
Unpacking golang-dns-dev (0.0~git20151030.0.6a15566-1) ...
Selecting previously unselected package golang-github-agtorre-gocolorize-dev.
Preparing to unpack .../golang-github-agtorre-gocolorize-dev_1.0.0-1_all.deb ...
Unpacking golang-github-agtorre-gocolorize-dev (1.0.0-1) ...
Selecting previously unselected package golang-github-armon-circbuf-dev.
Preparing to unpack .../golang-github-armon-circbuf-dev_0.0~git20150827.0.bbbad09-1_all.deb ...
Unpacking golang-github-armon-circbuf-dev (0.0~git20150827.0.bbbad09-1) ...
Selecting previously unselected package golang-github-julienschmidt-httprouter-dev.
Preparing to unpack .../golang-github-julienschmidt-httprouter-dev_1.1-1_all.deb ...
Unpacking golang-github-julienschmidt-httprouter-dev (1.1-1) ...
Selecting previously unselected package golang-x-text-dev.
Preparing to unpack .../golang-x-text-dev_0+git20151217.cf49866-1_all.deb ...
Unpacking golang-x-text-dev (0+git20151217.cf49866-1) ...
Selecting previously unselected package golang-golang-x-crypto-dev.
Preparing to unpack .../golang-golang-x-crypto-dev_1%3a0.0~git20151201.0.7b85b09-2_all.deb ...
Unpacking golang-golang-x-crypto-dev (1:0.0~git20151201.0.7b85b09-2) ...
Selecting previously unselected package golang-golang-x-net-dev.
Preparing to unpack .../golang-golang-x-net-dev_1%3a0.0+git20160110.4fd4a9f-1_all.deb ...
Unpacking golang-golang-x-net-dev (1:0.0+git20160110.4fd4a9f-1) ...
Selecting previously unselected package golang-goprotobuf-dev.
Preparing to unpack .../golang-goprotobuf-dev_0.0~git20150526-2_armhf.deb ...
Unpacking golang-goprotobuf-dev (0.0~git20150526-2) ...
Selecting previously unselected package golang-protobuf-extensions-dev.
Preparing to unpack .../golang-protobuf-extensions-dev_0+git20150513.fc2b8d3-4_all.deb ...
Unpacking golang-protobuf-extensions-dev (0+git20150513.fc2b8d3-4) ...
Selecting previously unselected package golang-github-kardianos-osext-dev.
Preparing to unpack .../golang-github-kardianos-osext-dev_0.0~git20151124.0.10da294-2_all.deb ...
Unpacking golang-github-kardianos-osext-dev (0.0~git20151124.0.10da294-2) ...
Selecting previously unselected package golang-github-bugsnag-panicwrap-dev.
Preparing to unpack .../golang-github-bugsnag-panicwrap-dev_0.0~git20141111-1_all.deb ...
Unpacking golang-github-bugsnag-panicwrap-dev (0.0~git20141111-1) ...
Selecting previously unselected package golang-github-juju-loggo-dev.
Preparing to unpack .../golang-github-juju-loggo-dev_0.0~git20150527.0.8477fc9-1_all.deb ...
Unpacking golang-github-juju-loggo-dev (0.0~git20150527.0.8477fc9-1) ...
Selecting previously unselected package golang-github-bradfitz-gomemcache-dev.
Preparing to unpack .../golang-github-bradfitz-gomemcache-dev_0.0~git20141109-1_all.deb ...
Unpacking golang-github-bradfitz-gomemcache-dev (0.0~git20141109-1) ...
Selecting previously unselected package golang-github-garyburd-redigo-dev.
Preparing to unpack .../golang-github-garyburd-redigo-dev_0.0~git20150901.0.d8dbe4d-1_all.deb ...
Unpacking golang-github-garyburd-redigo-dev (0.0~git20150901.0.d8dbe4d-1) ...
Selecting previously unselected package golang-github-go-fsnotify-fsnotify-dev.
Preparing to unpack .../golang-github-go-fsnotify-fsnotify-dev_1.2.9-1_all.deb ...
Unpacking golang-github-go-fsnotify-fsnotify-dev (1.2.9-1) ...
Selecting previously unselected package golang-github-robfig-config-dev.
Preparing to unpack .../golang-github-robfig-config-dev_0.0~git20141208-1_all.deb ...
Unpacking golang-github-robfig-config-dev (0.0~git20141208-1) ...
Selecting previously unselected package golang-github-robfig-pathtree-dev.
Preparing to unpack .../golang-github-robfig-pathtree-dev_0.0~git20140121-1_all.deb ...
Unpacking golang-github-robfig-pathtree-dev (0.0~git20140121-1) ...
Selecting previously unselected package golang-github-revel-revel-dev.
Preparing to unpack .../golang-github-revel-revel-dev_0.12.0+dfsg-1_all.deb ...
Unpacking golang-github-revel-revel-dev (0.12.0+dfsg-1) ...
Selecting previously unselected package golang-github-bugsnag-bugsnag-go-dev.
Preparing to unpack .../golang-github-bugsnag-bugsnag-go-dev_1.0.5+dfsg-1_all.deb ...
Unpacking golang-github-bugsnag-bugsnag-go-dev (1.0.5+dfsg-1) ...
Selecting previously unselected package golang-github-getsentry-raven-go-dev.
Preparing to unpack .../golang-github-getsentry-raven-go-dev_0.0~git20150721.0.74c334d-1_all.deb ...
Unpacking golang-github-getsentry-raven-go-dev (0.0~git20150721.0.74c334d-1) ...
Selecting previously unselected package golang-objx-dev.
Preparing to unpack .../golang-objx-dev_0.0~git20140527-4_all.deb ...
Unpacking golang-objx-dev (0.0~git20140527-4) ...
Selecting previously unselected package golang-github-stretchr-testify-dev.
Preparing to unpack .../golang-github-stretchr-testify-dev_1.0-2_all.deb ...
Unpacking golang-github-stretchr-testify-dev (1.0-2) ...
Selecting previously unselected package golang-github-stvp-go-udp-testing-dev.
Preparing to unpack .../golang-github-stvp-go-udp-testing-dev_0.0~git20150316.0.abcd331-1_all.deb ...
Unpacking golang-github-stvp-go-udp-testing-dev (0.0~git20150316.0.abcd331-1) ...
Selecting previously unselected package golang-github-tobi-airbrake-go-dev.
Preparing to unpack .../golang-github-tobi-airbrake-go-dev_0.0~git20150109-1_all.deb ...
Unpacking golang-github-tobi-airbrake-go-dev (0.0~git20150109-1) ...
Selecting previously unselected package golang-github-sirupsen-logrus-dev.
Preparing to unpack .../golang-github-sirupsen-logrus-dev_0.8.7-3_all.deb ...
Unpacking golang-github-sirupsen-logrus-dev (0.8.7-3) ...
Selecting previously unselected package golang-logrus-dev.
Preparing to unpack .../golang-logrus-dev_0.8.7-3_all.deb ...
Unpacking golang-logrus-dev (0.8.7-3) ...
Selecting previously unselected package golang-github-prometheus-common-dev.
Preparing to unpack .../golang-github-prometheus-common-dev_0+git20160104.0a3005b-1_all.deb ...
Unpacking golang-github-prometheus-common-dev (0+git20160104.0a3005b-1) ...
Selecting previously unselected package golang-procfs-dev.
Preparing to unpack .../golang-procfs-dev_0+git20150616.c91d8ee-1_all.deb ...
Unpacking golang-procfs-dev (0+git20150616.c91d8ee-1) ...
Selecting previously unselected package golang-prometheus-client-dev.
Preparing to unpack .../golang-prometheus-client-dev_0.7.0+ds-3_all.deb ...
Unpacking golang-prometheus-client-dev (0.7.0+ds-3) ...
Selecting previously unselected package golang-github-datadog-datadog-go-dev.
Preparing to unpack .../golang-github-datadog-datadog-go-dev_0.0~git20150930.0.b050cd8-1_all.deb ...
Unpacking golang-github-datadog-datadog-go-dev (0.0~git20150930.0.b050cd8-1) ...
Selecting previously unselected package golang-github-armon-go-metrics-dev.
Preparing to unpack .../golang-github-armon-go-metrics-dev_0.0~git20151207.0.06b6099-1_all.deb ...
Unpacking golang-github-armon-go-metrics-dev (0.0~git20151207.0.06b6099-1) ...
Selecting previously unselected package golang-github-armon-go-radix-dev.
Preparing to unpack .../golang-github-armon-go-radix-dev_0.0~git20150602.0.fbd82e8-1_all.deb ...
Unpacking golang-github-armon-go-radix-dev (0.0~git20150602.0.fbd82e8-1) ...
Selecting previously unselected package golang-github-armon-gomdb-dev.
Preparing to unpack .../golang-github-armon-gomdb-dev_0.0~git20150106.0.151f2e0-1_all.deb ...
Unpacking golang-github-armon-gomdb-dev (0.0~git20150106.0.151f2e0-1) ...
Selecting previously unselected package golang-github-bgentry-speakeasy-dev.
Preparing to unpack .../golang-github-bgentry-speakeasy-dev_0.0~git20150902.0.36e9cfd-1_all.deb ...
Unpacking golang-github-bgentry-speakeasy-dev (0.0~git20150902.0.36e9cfd-1) ...
Selecting previously unselected package golang-github-boltdb-bolt-dev.
Preparing to unpack .../golang-github-boltdb-bolt-dev_1.1.0-1_all.deb ...
Unpacking golang-github-boltdb-bolt-dev (1.1.0-1) ...
Selecting previously unselected package golang-github-coreos-go-systemd-dev.
Preparing to unpack .../golang-github-coreos-go-systemd-dev_5-1_all.deb ...
Unpacking golang-github-coreos-go-systemd-dev (5-1) ...
Selecting previously unselected package golang-github-docker-go-units-dev.
Preparing to unpack .../golang-github-docker-go-units-dev_0.3.0-1_all.deb ...
Unpacking golang-github-docker-go-units-dev (0.3.0-1) ...
Selecting previously unselected package golang-github-elazarl-go-bindata-assetfs-dev.
Preparing to unpack .../golang-github-elazarl-go-bindata-assetfs-dev_0.0~git20151224.0.57eb5e1-1_all.deb ...
Unpacking golang-github-elazarl-go-bindata-assetfs-dev (0.0~git20151224.0.57eb5e1-1) ...
Selecting previously unselected package golang-github-opencontainers-specs-dev.
Preparing to unpack .../golang-github-opencontainers-specs-dev_0.0~git20150829.0.e9cb564-1_all.deb ...
Unpacking golang-github-opencontainers-specs-dev (0.0~git20150829.0.e9cb564-1) ...
Selecting previously unselected package golang-github-vishvananda-netns-dev.
Preparing to unpack .../golang-github-vishvananda-netns-dev_0.0~git20150710.0.604eaf1-1_all.deb ...
Unpacking golang-github-vishvananda-netns-dev (0.0~git20150710.0.604eaf1-1) ...
Selecting previously unselected package golang-github-vishvananda-netlink-dev.
Preparing to unpack .../golang-github-vishvananda-netlink-dev_0.0~git20160306.0.4fdf23c-1_all.deb ...
Unpacking golang-github-vishvananda-netlink-dev (0.0~git20160306.0.4fdf23c-1) ...
Selecting previously unselected package golang-gocapability-dev.
Preparing to unpack .../golang-gocapability-dev_0.0~git20150506.1.66ef2aa-1_all.deb ...
Unpacking golang-gocapability-dev (0.0~git20150506.1.66ef2aa-1) ...
Selecting previously unselected package golang-github-opencontainers-runc-dev.
Preparing to unpack .../golang-github-opencontainers-runc-dev_0.0.8+dfsg-1_all.deb ...
Unpacking golang-github-opencontainers-runc-dev (0.0.8+dfsg-1) ...
Selecting previously unselected package golang-github-gorilla-mux-dev.
Preparing to unpack .../golang-github-gorilla-mux-dev_0.0~git20150814.0.f7b6aaa-1_all.deb ...
Unpacking golang-github-gorilla-mux-dev (0.0~git20150814.0.f7b6aaa-1) ...
Selecting previously unselected package golang-golang-x-sys-dev.
Preparing to unpack .../golang-golang-x-sys-dev_0.0~git20150612-1_all.deb ...
Unpacking golang-golang-x-sys-dev (0.0~git20150612-1) ...
Selecting previously unselected package golang-github-fsouza-go-dockerclient-dev.
Preparing to unpack .../golang-github-fsouza-go-dockerclient-dev_0.0+git20160316-1_all.deb ...
Unpacking golang-github-fsouza-go-dockerclient-dev (0.0+git20160316-1) ...
Selecting previously unselected package golang-github-hashicorp-errwrap-dev.
Preparing to unpack .../golang-github-hashicorp-errwrap-dev_0.0~git20141028.0.7554cd9-1_all.deb ...
Unpacking golang-github-hashicorp-errwrap-dev (0.0~git20141028.0.7554cd9-1) ...
Selecting previously unselected package golang-github-hashicorp-go-cleanhttp-dev.
Preparing to unpack .../golang-github-hashicorp-go-cleanhttp-dev_0.0~git20160217.0.875fb67-1_all.deb ...
Unpacking golang-github-hashicorp-go-cleanhttp-dev (0.0~git20160217.0.875fb67-1) ...
Selecting previously unselected package golang-github-hashicorp-go-checkpoint-dev.
Preparing to unpack .../golang-github-hashicorp-go-checkpoint-dev_0.0~git20151022.0.e4b2dc3-1_all.deb ...
Unpacking golang-github-hashicorp-go-checkpoint-dev (0.0~git20151022.0.e4b2dc3-1) ...
Selecting previously unselected package golang-github-hashicorp-golang-lru-dev.
Preparing to unpack .../golang-github-hashicorp-golang-lru-dev_0.0~git20160207.0.a0d98a5-1_all.deb ...
Unpacking golang-github-hashicorp-golang-lru-dev (0.0~git20160207.0.a0d98a5-1) ...
Selecting previously unselected package golang-github-hashicorp-uuid-dev.
Preparing to unpack .../golang-github-hashicorp-uuid-dev_0.0~git20160218.0.6994546-1_all.deb ...
Unpacking golang-github-hashicorp-uuid-dev (0.0~git20160218.0.6994546-1) ...
Selecting previously unselected package golang-github-hashicorp-go-immutable-radix-dev.
Preparing to unpack .../golang-github-hashicorp-go-immutable-radix-dev_0.0~git20160222.0.8e8ed81-1_all.deb ...
Unpacking golang-github-hashicorp-go-immutable-radix-dev (0.0~git20160222.0.8e8ed81-1) ...
Selecting previously unselected package golang-github-hashicorp-go-memdb-dev.
Preparing to unpack .../golang-github-hashicorp-go-memdb-dev_0.0~git20160301.0.98f52f5-1_all.deb ...
Unpacking golang-github-hashicorp-go-memdb-dev (0.0~git20160301.0.98f52f5-1) ...
Selecting previously unselected package golang-github-ugorji-go-msgpack-dev.
Preparing to unpack .../golang-github-ugorji-go-msgpack-dev_0.0~git20130605.792643-1_all.deb ...
Unpacking golang-github-ugorji-go-msgpack-dev (0.0~git20130605.792643-1) ...
Selecting previously unselected package golang-github-ugorji-go-codec-dev.
Preparing to unpack .../golang-github-ugorji-go-codec-dev_0.0~git20151130.0.357a44b-1_all.deb ...
Unpacking golang-github-ugorji-go-codec-dev (0.0~git20151130.0.357a44b-1) ...
Selecting previously unselected package golang-gopkg-vmihailenco-msgpack.v2-dev.
Preparing to unpack .../golang-gopkg-vmihailenco-msgpack.v2-dev_2.4.11-1_all.deb ...
Unpacking golang-gopkg-vmihailenco-msgpack.v2-dev (2.4.11-1) ...
Selecting previously unselected package golang-gopkg-tomb.v2-dev.
Preparing to unpack .../golang-gopkg-tomb.v2-dev_0.0~git20140626.14b3d72-1_all.deb ...
Unpacking golang-gopkg-tomb.v2-dev (0.0~git20140626.14b3d72-1) ...
Selecting previously unselected package golang-gopkg-mgo.v2-dev.
Preparing to unpack .../golang-gopkg-mgo.v2-dev_2015.12.06-1_all.deb ...
Unpacking golang-gopkg-mgo.v2-dev (2015.12.06-1) ...
Selecting previously unselected package golang-github-hashicorp-go-msgpack-dev.
Preparing to unpack .../golang-github-hashicorp-go-msgpack-dev_0.0~git20150518-1_all.deb ...
Unpacking golang-github-hashicorp-go-msgpack-dev (0.0~git20150518-1) ...
Selecting previously unselected package golang-github-hashicorp-go-multierror-dev.
Preparing to unpack .../golang-github-hashicorp-go-multierror-dev_0.0~git20150916.0.d30f099-1_all.deb ...
Unpacking golang-github-hashicorp-go-multierror-dev (0.0~git20150916.0.d30f099-1) ...
Selecting previously unselected package golang-github-hashicorp-go-reap-dev.
Preparing to unpack .../golang-github-hashicorp-go-reap-dev_0.0~git20160113.0.2d85522-1_all.deb ...
Unpacking golang-github-hashicorp-go-reap-dev (0.0~git20160113.0.2d85522-1) ...
Selecting previously unselected package golang-github-hashicorp-go-syslog-dev.
Preparing to unpack .../golang-github-hashicorp-go-syslog-dev_0.0~git20150218.0.42a2b57-1_all.deb ...
Unpacking golang-github-hashicorp-go-syslog-dev (0.0~git20150218.0.42a2b57-1) ...
Selecting previously unselected package golang-github-hashicorp-hcl-dev.
Preparing to unpack .../golang-github-hashicorp-hcl-dev_0.0~git20151110.0.fa160f1-1_all.deb ...
Unpacking golang-github-hashicorp-hcl-dev (0.0~git20151110.0.fa160f1-1) ...
Selecting previously unselected package golang-github-hashicorp-logutils-dev.
Preparing to unpack .../golang-github-hashicorp-logutils-dev_0.0~git20150609.0.0dc08b1-1_all.deb ...
Unpacking golang-github-hashicorp-logutils-dev (0.0~git20150609.0.0dc08b1-1) ...
Selecting previously unselected package golang-github-hashicorp-mdns-dev.
Preparing to unpack .../golang-github-hashicorp-mdns-dev_0.0~git20150317.0.2b439d3-1_all.deb ...
Unpacking golang-github-hashicorp-mdns-dev (0.0~git20150317.0.2b439d3-1) ...
Selecting previously unselected package golang-github-hashicorp-memberlist-dev.
Preparing to unpack .../golang-github-hashicorp-memberlist-dev_0.0~git20160225.0.ae9a8d9-1_all.deb ...
Unpacking golang-github-hashicorp-memberlist-dev (0.0~git20160225.0.ae9a8d9-1) ...
Selecting previously unselected package golang-github-hashicorp-net-rpc-msgpackrpc-dev.
Preparing to unpack .../golang-github-hashicorp-net-rpc-msgpackrpc-dev_0.0~git20151116.0.a14192a-1_all.deb ...
Unpacking golang-github-hashicorp-net-rpc-msgpackrpc-dev (0.0~git20151116.0.a14192a-1) ...
Selecting previously unselected package golang-github-hashicorp-raft-dev.
Preparing to unpack .../golang-github-hashicorp-raft-dev_0.0~git20160317.0.3359516-1_all.deb ...
Unpacking golang-github-hashicorp-raft-dev (0.0~git20160317.0.3359516-1) ...
Selecting previously unselected package golang-github-hashicorp-raft-boltdb-dev.
Preparing to unpack .../golang-github-hashicorp-raft-boltdb-dev_0.0~git20150201.d1e82c1-1_all.deb ...
Unpacking golang-github-hashicorp-raft-boltdb-dev (0.0~git20150201.d1e82c1-1) ...
Selecting previously unselected package golang-github-hashicorp-yamux-dev.
Preparing to unpack .../golang-github-hashicorp-yamux-dev_0.0~git20151129.0.df94978-1_all.deb ...
Unpacking golang-github-hashicorp-yamux-dev (0.0~git20151129.0.df94978-1) ...
Selecting previously unselected package golang-github-hashicorp-scada-client-dev.
Preparing to unpack .../golang-github-hashicorp-scada-client-dev_0.0~git20150828.0.84989fd-1_all.deb ...
Unpacking golang-github-hashicorp-scada-client-dev (0.0~git20150828.0.84989fd-1) ...
Selecting previously unselected package golang-github-mitchellh-cli-dev.
Preparing to unpack .../golang-github-mitchellh-cli-dev_0.0~git20160203.0.5c87c51-1_all.deb ...
Unpacking golang-github-mitchellh-cli-dev (0.0~git20160203.0.5c87c51-1) ...
Selecting previously unselected package golang-github-mitchellh-mapstructure-dev.
Preparing to unpack .../golang-github-mitchellh-mapstructure-dev_0.0~git20150717.0.281073e-2_all.deb ...
Unpacking golang-github-mitchellh-mapstructure-dev (0.0~git20150717.0.281073e-2) ...
Selecting previously unselected package golang-github-ryanuber-columnize-dev.
Preparing to unpack .../golang-github-ryanuber-columnize-dev_2.1.0-1_all.deb ...
Unpacking golang-github-ryanuber-columnize-dev (2.1.0-1) ...
Selecting previously unselected package golang-github-hashicorp-serf-dev.
Preparing to unpack .../golang-github-hashicorp-serf-dev_0.7.0~ds1-1_all.deb ...
Unpacking golang-github-hashicorp-serf-dev (0.7.0~ds1-1) ...
Selecting previously unselected package golang-github-inconshreveable-muxado-dev.
Preparing to unpack .../golang-github-inconshreveable-muxado-dev_0.0~git20140312.0.f693c7e-1_all.deb ...
Unpacking golang-github-inconshreveable-muxado-dev (0.0~git20140312.0.f693c7e-1) ...
Selecting previously unselected package sbuild-build-depends-consul-dummy.
Preparing to unpack .../sbuild-build-depends-consul-dummy.deb ...
Unpacking sbuild-build-depends-consul-dummy (0.invalid.0) ...
Processing triggers for libc-bin (2.22-3) ...
Setting up groff-base (1.22.3-7) ...
Setting up libbsd0:armhf (0.8.2-1) ...
Setting up bsdmainutils (9.0.9) ...
update-alternatives: using /usr/bin/bsd-write to provide /usr/bin/write (write) in auto mode
update-alternatives: using /usr/bin/bsd-from to provide /usr/bin/from (from) in auto mode
Setting up libpipeline1:armhf (1.4.1-2) ...
Setting up man-db (2.7.5-1) ...
Not building database; man-db/auto-update is not 'true'.
Setting up liblmdb0:armhf (0.9.17-3) ...
Setting up liblmdb-dev:armhf (0.9.17-3) ...
Setting up libunistring0:armhf (0.9.3-5.2) ...
Setting up libssl1.0.2:armhf (1.0.2g-1) ...
Setting up libmagic1:armhf (1:5.25-2) ...
Setting up file (1:5.25-2) ...
Setting up gettext-base (0.19.7-2) ...
Setting up libsasl2-modules-db:armhf (2.1.26.dfsg1-14+b1) ...
Setting up libsasl2-2:armhf (2.1.26.dfsg1-14+b1) ...
Setting up libicu55:armhf (55.1-7) ...
Setting up libxml2:armhf (2.9.3+dfsg1-1) ...
Setting up autotools-dev (20150820.1) ...
Setting up openssl (1.0.2g-1) ...
Setting up ca-certificates (20160104) ...
Setting up libffi6:armhf (3.2.1-4) ...
Setting up libglib2.0-0:armhf (2.46.2-3) ...
No schema files found: doing nothing.
Setting up libcroco3:armhf (0.6.11-1) ...
Setting up gettext (0.19.7-2) ...
Setting up intltool-debian (0.35.0+20060710.4) ...
Setting up po-debconf (1.0.19) ...
Setting up libarchive-zip-perl (1.56-2) ...
Setting up libfile-stripnondeterminism-perl (0.016-1) ...
Setting up libtimedate-perl (2.3000-2) ...
Setting up golang-github-docker-docker-dev (1.8.3~ds1-2) ...
Setting up golang-docker-dev (1.8.3~ds1-2) ...
Setting up golang-src (2:1.6-1+rpi1) ...
Setting up golang-go (2:1.6-1+rpi1) ...
update-alternatives: using /usr/lib/go/bin/go to provide /usr/bin/go (go) in auto mode
Setting up golang-text-dev (0.0~git20130502-1) ...
Setting up golang-pretty-dev (0.0~git20130613-1) ...
Setting up golang-github-bmizerany-assert-dev (0.0~git20120716-1) ...
Setting up golang-github-bitly-go-simplejson-dev (0.5.0-1) ...
Setting up golang-github-mattn-go-isatty-dev (0.0.1-1) ...
Setting up libprotobuf9v5:armhf (2.6.1-1.3) ...
Setting up libprotoc9v5:armhf (2.6.1-1.3) ...
Setting up libsasl2-dev (2.1.26.dfsg1-14+b1) ...
Setting up libsystemd-dev:armhf (229-2) ...
Setting up pkg-config (0.29-3) ...
Setting up protobuf-compiler (2.6.1-1.3) ...
Setting up golang-check.v1-dev (0.0+git20150729.11d3bc7-3) ...
Setting up golang-github-codegangsta-cli-dev (0.0~git20150117-3) ...
Setting up golang-codegangsta-cli-dev (0.0~git20150117-3) ...
Setting up golang-context-dev (0.0~git20140604.1.14f550f-1) ...
Setting up golang-dbus-dev (3-1) ...
Setting up golang-dns-dev (0.0~git20151030.0.6a15566-1) ...
Setting up golang-github-agtorre-gocolorize-dev (1.0.0-1) ...
Setting up golang-github-armon-circbuf-dev (0.0~git20150827.0.bbbad09-1) ...
Setting up golang-github-julienschmidt-httprouter-dev (1.1-1) ...
Setting up golang-x-text-dev (0+git20151217.cf49866-1) ...
Setting up golang-golang-x-crypto-dev (1:0.0~git20151201.0.7b85b09-2) ...
Setting up golang-golang-x-net-dev (1:0.0+git20160110.4fd4a9f-1) ...
Setting up golang-goprotobuf-dev (0.0~git20150526-2) ...
Setting up golang-protobuf-extensions-dev (0+git20150513.fc2b8d3-4) ...
Setting up golang-github-kardianos-osext-dev (0.0~git20151124.0.10da294-2) ...
Setting up golang-github-bugsnag-panicwrap-dev (0.0~git20141111-1) ...
Setting up golang-github-juju-loggo-dev (0.0~git20150527.0.8477fc9-1) ...
Setting up golang-github-bradfitz-gomemcache-dev (0.0~git20141109-1) ...
Setting up golang-github-garyburd-redigo-dev (0.0~git20150901.0.d8dbe4d-1) ...
Setting up golang-github-go-fsnotify-fsnotify-dev (1.2.9-1) ...
Setting up golang-github-robfig-config-dev (0.0~git20141208-1) ...
Setting up golang-github-robfig-pathtree-dev (0.0~git20140121-1) ...
Setting up golang-github-revel-revel-dev (0.12.0+dfsg-1) ...
Setting up golang-github-bugsnag-bugsnag-go-dev (1.0.5+dfsg-1) ...
Setting up golang-github-getsentry-raven-go-dev (0.0~git20150721.0.74c334d-1) ...
Setting up golang-objx-dev (0.0~git20140527-4) ...
Setting up golang-github-stretchr-testify-dev (1.0-2) ...
Setting up golang-github-stvp-go-udp-testing-dev (0.0~git20150316.0.abcd331-1) ...
Setting up golang-github-tobi-airbrake-go-dev (0.0~git20150109-1) ...
Setting up golang-github-sirupsen-logrus-dev (0.8.7-3) ...
Setting up golang-logrus-dev (0.8.7-3) ...
Setting up golang-github-prometheus-common-dev (0+git20160104.0a3005b-1) ...
Setting up golang-procfs-dev (0+git20150616.c91d8ee-1) ...
Setting up golang-prometheus-client-dev (0.7.0+ds-3) ...
Setting up golang-github-datadog-datadog-go-dev (0.0~git20150930.0.b050cd8-1) ...
Setting up golang-github-armon-go-metrics-dev (0.0~git20151207.0.06b6099-1) ...
Setting up golang-github-armon-go-radix-dev (0.0~git20150602.0.fbd82e8-1) ...
Setting up golang-github-armon-gomdb-dev (0.0~git20150106.0.151f2e0-1) ...
Setting up golang-github-bgentry-speakeasy-dev (0.0~git20150902.0.36e9cfd-1) ...
Setting up golang-github-boltdb-bolt-dev (1.1.0-1) ...
Setting up golang-github-coreos-go-systemd-dev (5-1) ...
Setting up golang-github-docker-go-units-dev (0.3.0-1) ...
Setting up golang-github-elazarl-go-bindata-assetfs-dev (0.0~git20151224.0.57eb5e1-1) ...
Setting up golang-github-opencontainers-specs-dev (0.0~git20150829.0.e9cb564-1) ...
Setting up golang-github-vishvananda-netns-dev (0.0~git20150710.0.604eaf1-1) ...
Setting up golang-github-vishvananda-netlink-dev (0.0~git20160306.0.4fdf23c-1) ...
Setting up golang-gocapability-dev (0.0~git20150506.1.66ef2aa-1) ...
Setting up golang-github-opencontainers-runc-dev (0.0.8+dfsg-1) ...
Setting up golang-github-gorilla-mux-dev (0.0~git20150814.0.f7b6aaa-1) ...
Setting up golang-golang-x-sys-dev (0.0~git20150612-1) ...
Setting up golang-github-fsouza-go-dockerclient-dev (0.0+git20160316-1) ...
Setting up golang-github-hashicorp-errwrap-dev (0.0~git20141028.0.7554cd9-1) ...
Setting up golang-github-hashicorp-go-cleanhttp-dev (0.0~git20160217.0.875fb67-1) ...
Setting up golang-github-hashicorp-go-checkpoint-dev (0.0~git20151022.0.e4b2dc3-1) ...
Setting up golang-github-hashicorp-golang-lru-dev (0.0~git20160207.0.a0d98a5-1) ...
Setting up golang-github-hashicorp-uuid-dev (0.0~git20160218.0.6994546-1) ...
Setting up golang-github-hashicorp-go-immutable-radix-dev (0.0~git20160222.0.8e8ed81-1) ...
Setting up golang-github-hashicorp-go-memdb-dev (0.0~git20160301.0.98f52f5-1) ...
Setting up golang-github-ugorji-go-msgpack-dev (0.0~git20130605.792643-1) ...
Setting up golang-github-ugorji-go-codec-dev (0.0~git20151130.0.357a44b-1) ...
Setting up golang-gopkg-vmihailenco-msgpack.v2-dev (2.4.11-1) ...
Setting up golang-gopkg-tomb.v2-dev (0.0~git20140626.14b3d72-1) ...
Setting up golang-gopkg-mgo.v2-dev (2015.12.06-1) ...
Setting up golang-github-hashicorp-go-msgpack-dev (0.0~git20150518-1) ...
Setting up golang-github-hashicorp-go-multierror-dev (0.0~git20150916.0.d30f099-1) ...
Setting up golang-github-hashicorp-go-reap-dev (0.0~git20160113.0.2d85522-1) ...
Setting up golang-github-hashicorp-go-syslog-dev (0.0~git20150218.0.42a2b57-1) ...
Setting up golang-github-hashicorp-hcl-dev (0.0~git20151110.0.fa160f1-1) ...
Setting up golang-github-hashicorp-logutils-dev (0.0~git20150609.0.0dc08b1-1) ...
Setting up golang-github-hashicorp-mdns-dev (0.0~git20150317.0.2b439d3-1) ...
Setting up golang-github-hashicorp-memberlist-dev (0.0~git20160225.0.ae9a8d9-1) ...
Setting up golang-github-hashicorp-net-rpc-msgpackrpc-dev (0.0~git20151116.0.a14192a-1) ...
Setting up golang-github-hashicorp-raft-dev (0.0~git20160317.0.3359516-1) ...
Setting up golang-github-hashicorp-raft-boltdb-dev (0.0~git20150201.d1e82c1-1) ...
Setting up golang-github-hashicorp-yamux-dev (0.0~git20151129.0.df94978-1) ...
Setting up golang-github-hashicorp-scada-client-dev (0.0~git20150828.0.84989fd-1) ...
Setting up golang-github-mitchellh-cli-dev (0.0~git20160203.0.5c87c51-1) ...
Setting up golang-github-mitchellh-mapstructure-dev (0.0~git20150717.0.281073e-2) ...
Setting up golang-github-ryanuber-columnize-dev (2.1.0-1) ...
Setting up golang-github-hashicorp-serf-dev (0.7.0~ds1-1) ...
Setting up golang-github-inconshreveable-muxado-dev (0.0~git20140312.0.f693c7e-1) ...
Setting up debhelper (9.20160313) ...
Setting up dh-golang (1.12) ...
Setting up sbuild-build-depends-consul-dummy (0.invalid.0) ...
Setting up dh-strip-nondeterminism (0.016-1) ...
Processing triggers for libc-bin (2.22-3) ...
Processing triggers for ca-certificates (20160104) ...
Updating certificates in /etc/ssl/certs...
173 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
W: No sandbox user '_apt' on the system, can not drop privileges

+------------------------------------------------------------------------------+
| Build environment                                                            |
+------------------------------------------------------------------------------+

Kernel: Linux 3.19.0-trunk-armmp armhf (armv7l)
Toolchain package versions: binutils_2.26-5 dpkg-dev_1.18.4 g++-5_5.3.1-11 gcc-5_5.3.1-11 libc6-dev_2.22-3 libstdc++-5-dev_5.3.1-11 libstdc++6_5.3.1-11 linux-libc-dev_3.18.5-1~exp1+rpi19+stretch
Package versions: adduser_3.114 apt_1.2.6 autotools-dev_20150820.1 base-files_9.5+rpi1 base-passwd_3.5.39 bash_4.3-14 binutils_2.26-5 bsdmainutils_9.0.9 bsdutils_1:2.27.1-6 build-essential_11.7 bzip2_1.0.6-8 ca-certificates_20160104 coreutils_8.25-2 cpio_2.11+dfsg-5 cpp_4:5.3.1-1+rpi1 cpp-5_5.3.1-11 dash_0.5.8-2.1 debconf_1.5.59 debfoster_2.7-2 debhelper_9.20160313 debianutils_4.7 dh-golang_1.12 dh-strip-nondeterminism_0.016-1 diffutils_1:3.3-3 dmsetup_2:1.02.116-1 dpkg_1.18.4 dpkg-dev_1.18.4 e2fslibs_1.42.13-1 e2fsprogs_1.42.13-1 fakeroot_1.20.2-1 file_1:5.25-2 findutils_4.6.0+git+20160126-2 g++_4:5.3.1-1+rpi1 g++-5_5.3.1-11 gcc_4:5.3.1-1+rpi1 gcc-4.6-base_4.6.4-5+rpi1 gcc-4.7-base_4.7.3-11+rpi1 gcc-4.8-base_4.8.5-4 gcc-4.9-base_4.9.3-12 gcc-5_5.3.1-11 gcc-5-base_5.3.1-11 gettext_0.19.7-2 gettext-base_0.19.7-2 gnupg_1.4.20-4 golang-check.v1-dev_0.0+git20150729.11d3bc7-3 golang-codegangsta-cli-dev_0.0~git20150117-3 golang-context-dev_0.0~git20140604.1.14f550f-1 golang-dbus-dev_3-1 golang-dns-dev_0.0~git20151030.0.6a15566-1 golang-docker-dev_1.8.3~ds1-2 golang-github-agtorre-gocolorize-dev_1.0.0-1 golang-github-armon-circbuf-dev_0.0~git20150827.0.bbbad09-1 golang-github-armon-go-metrics-dev_0.0~git20151207.0.06b6099-1 golang-github-armon-go-radix-dev_0.0~git20150602.0.fbd82e8-1 golang-github-armon-gomdb-dev_0.0~git20150106.0.151f2e0-1 golang-github-bgentry-speakeasy-dev_0.0~git20150902.0.36e9cfd-1 golang-github-bitly-go-simplejson-dev_0.5.0-1 golang-github-bmizerany-assert-dev_0.0~git20120716-1 golang-github-boltdb-bolt-dev_1.1.0-1 golang-github-bradfitz-gomemcache-dev_0.0~git20141109-1 golang-github-bugsnag-bugsnag-go-dev_1.0.5+dfsg-1 golang-github-bugsnag-panicwrap-dev_0.0~git20141111-1 golang-github-codegangsta-cli-dev_0.0~git20150117-3 golang-github-coreos-go-systemd-dev_5-1 golang-github-datadog-datadog-go-dev_0.0~git20150930.0.b050cd8-1 golang-github-docker-docker-dev_1.8.3~ds1-2 golang-github-docker-go-units-dev_0.3.0-1 golang-github-elazarl-go-bindata-assetfs-dev_0.0~git20151224.0.57eb5e1-1 golang-github-fsouza-go-dockerclient-dev_0.0+git20160316-1 golang-github-garyburd-redigo-dev_0.0~git20150901.0.d8dbe4d-1 golang-github-getsentry-raven-go-dev_0.0~git20150721.0.74c334d-1 golang-github-go-fsnotify-fsnotify-dev_1.2.9-1 golang-github-gorilla-mux-dev_0.0~git20150814.0.f7b6aaa-1 golang-github-hashicorp-errwrap-dev_0.0~git20141028.0.7554cd9-1 golang-github-hashicorp-go-checkpoint-dev_0.0~git20151022.0.e4b2dc3-1 golang-github-hashicorp-go-cleanhttp-dev_0.0~git20160217.0.875fb67-1 golang-github-hashicorp-go-immutable-radix-dev_0.0~git20160222.0.8e8ed81-1 golang-github-hashicorp-go-memdb-dev_0.0~git20160301.0.98f52f5-1 golang-github-hashicorp-go-msgpack-dev_0.0~git20150518-1 golang-github-hashicorp-go-multierror-dev_0.0~git20150916.0.d30f099-1 golang-github-hashicorp-go-reap-dev_0.0~git20160113.0.2d85522-1 golang-github-hashicorp-go-syslog-dev_0.0~git20150218.0.42a2b57-1 golang-github-hashicorp-golang-lru-dev_0.0~git20160207.0.a0d98a5-1 golang-github-hashicorp-hcl-dev_0.0~git20151110.0.fa160f1-1 golang-github-hashicorp-logutils-dev_0.0~git20150609.0.0dc08b1-1 golang-github-hashicorp-mdns-dev_0.0~git20150317.0.2b439d3-1 golang-github-hashicorp-memberlist-dev_0.0~git20160225.0.ae9a8d9-1 golang-github-hashicorp-net-rpc-msgpackrpc-dev_0.0~git20151116.0.a14192a-1 golang-github-hashicorp-raft-boltdb-dev_0.0~git20150201.d1e82c1-1 golang-github-hashicorp-raft-dev_0.0~git20160317.0.3359516-1 golang-github-hashicorp-scada-client-dev_0.0~git20150828.0.84989fd-1 
golang-github-hashicorp-serf-dev_0.7.0~ds1-1 golang-github-hashicorp-uuid-dev_0.0~git20160218.0.6994546-1 golang-github-hashicorp-yamux-dev_0.0~git20151129.0.df94978-1 golang-github-inconshreveable-muxado-dev_0.0~git20140312.0.f693c7e-1 golang-github-juju-loggo-dev_0.0~git20150527.0.8477fc9-1 golang-github-julienschmidt-httprouter-dev_1.1-1 golang-github-kardianos-osext-dev_0.0~git20151124.0.10da294-2 golang-github-mattn-go-isatty-dev_0.0.1-1 golang-github-mitchellh-cli-dev_0.0~git20160203.0.5c87c51-1 golang-github-mitchellh-mapstructure-dev_0.0~git20150717.0.281073e-2 golang-github-opencontainers-runc-dev_0.0.8+dfsg-1 golang-github-opencontainers-specs-dev_0.0~git20150829.0.e9cb564-1 golang-github-prometheus-common-dev_0+git20160104.0a3005b-1 golang-github-revel-revel-dev_0.12.0+dfsg-1 golang-github-robfig-config-dev_0.0~git20141208-1 golang-github-robfig-pathtree-dev_0.0~git20140121-1 golang-github-ryanuber-columnize-dev_2.1.0-1 golang-github-sirupsen-logrus-dev_0.8.7-3 golang-github-stretchr-testify-dev_1.0-2 golang-github-stvp-go-udp-testing-dev_0.0~git20150316.0.abcd331-1 golang-github-tobi-airbrake-go-dev_0.0~git20150109-1 golang-github-ugorji-go-codec-dev_0.0~git20151130.0.357a44b-1 golang-github-ugorji-go-msgpack-dev_0.0~git20130605.792643-1 golang-github-vishvananda-netlink-dev_0.0~git20160306.0.4fdf23c-1 golang-github-vishvananda-netns-dev_0.0~git20150710.0.604eaf1-1 golang-go_2:1.6-1+rpi1 golang-gocapability-dev_0.0~git20150506.1.66ef2aa-1 golang-golang-x-crypto-dev_1:0.0~git20151201.0.7b85b09-2 golang-golang-x-net-dev_1:0.0+git20160110.4fd4a9f-1 golang-golang-x-sys-dev_0.0~git20150612-1 golang-gopkg-mgo.v2-dev_2015.12.06-1 golang-gopkg-tomb.v2-dev_0.0~git20140626.14b3d72-1 golang-gopkg-vmihailenco-msgpack.v2-dev_2.4.11-1 golang-goprotobuf-dev_0.0~git20150526-2 golang-logrus-dev_0.8.7-3 golang-objx-dev_0.0~git20140527-4 golang-pretty-dev_0.0~git20130613-1 golang-procfs-dev_0+git20150616.c91d8ee-1 golang-prometheus-client-dev_0.7.0+ds-3 golang-protobuf-extensions-dev_0+git20150513.fc2b8d3-4 golang-src_2:1.6-1+rpi1 golang-text-dev_0.0~git20130502-1 golang-x-text-dev_0+git20151217.cf49866-1 gpgv_1.4.20-4 grep_2.22-1 groff-base_1.22.3-7 gzip_1.6-4 hostname_3.17 init_1.29 init-system-helpers_1.29 initscripts_2.88dsf-59.3 insserv_1.14.0-5.3 intltool-debian_0.35.0+20060710.4 klibc-utils_2.0.4-8+rpi1 kmod_22-1 libacl1_2.2.52-3 libapparmor1_2.10-3 libapt-pkg5.0_1.2.6 libarchive-zip-perl_1.56-2 libasan2_5.3.1-11 libatomic1_5.3.1-11 libattr1_1:2.4.47-2 libaudit-common_1:2.4.5-1 libaudit1_1:2.4.5-1 libblkid1_2.27.1-6 libbsd0_0.8.2-1 libbz2-1.0_1.0.6-8 libc-bin_2.22-3 libc-dev-bin_2.22-3 libc6_2.22-3 libc6-dev_2.22-3 libcap2_1:2.24-12 libcap2-bin_1:2.24-12 libcc1-0_5.3.1-11 libcomerr2_1.42.13-1 libcroco3_0.6.11-1 libcryptsetup4_2:1.7.0-2 libdb5.3_5.3.28-11 libdbus-1-3_1.10.8-1 libdebconfclient0_0.207 libdevmapper1.02.1_2:1.02.116-1 libdpkg-perl_1.18.4 libdrm2_2.4.67-1 libfakeroot_1.20.2-1 libfdisk1_2.27.1-6 libffi6_3.2.1-4 libfile-stripnondeterminism-perl_0.016-1 libgc1c2_1:7.4.2-7.3 libgcc-5-dev_5.3.1-11 libgcc1_1:5.3.1-11 libgcrypt20_1.6.5-2 libgdbm3_1.8.3-13.1 libglib2.0-0_2.46.2-3 libgmp10_2:6.1.0+dfsg-2 libgomp1_5.3.1-11 libgpg-error0_1.21-2 libicu55_55.1-7 libisl15_0.16.1-1 libklibc_2.0.4-8+rpi1 libkmod2_22-1 liblmdb-dev_0.9.17-3 liblmdb0_0.9.17-3 liblz4-1_0.0~r131-2 liblzma5_5.1.1alpha+20120614-2.1 libmagic1_1:5.25-2 libmount1_2.27.1-6 libmpc3_1.0.3-1 libmpfr4_3.1.4-1 libncurses5_6.0+20160213-1 libncursesw5_6.0+20160213-1 libpam-modules_1.1.8-3.2 libpam-modules-bin_1.1.8-3.2 
libpam-runtime_1.1.8-3.2 libpam0g_1.1.8-3.2 libpcre3_2:8.38-3 libperl5.22_5.22.1-9 libpipeline1_1.4.1-2 libpng12-0_1.2.54-4 libprocps5_2:3.3.11-3 libprotobuf9v5_2.6.1-1.3 libprotoc9v5_2.6.1-1.3 libreadline6_6.3-8+b3 libsasl2-2_2.1.26.dfsg1-14+b1 libsasl2-dev_2.1.26.dfsg1-14+b1 libsasl2-modules-db_2.1.26.dfsg1-14+b1 libseccomp2_2.2.3-3 libselinux1_2.4-3 libsemanage-common_2.4-3 libsemanage1_2.4-3 libsepol1_2.4-2 libsmartcols1_2.27.1-6 libss2_1.42.13-1 libssl1.0.2_1.0.2g-1 libstdc++-5-dev_5.3.1-11 libstdc++6_5.3.1-11 libsystemd-dev_229-2 libsystemd0_229-2 libtext-charwidth-perl_0.04-7+b6 libtext-iconv-perl_1.7-5+b7 libtext-wrapi18n-perl_0.06-7.1 libtimedate-perl_2.3000-2 libtinfo5_6.0+20160213-1 libubsan0_5.3.1-11 libudev1_229-2 libunistring0_0.9.3-5.2 libusb-0.1-4_2:0.1.12-28 libustr-1.0-1_1.0.4-5 libuuid1_2.27.1-6 libxml2_2.9.3+dfsg1-1 linux-libc-dev_3.18.5-1~exp1+rpi19+stretch login_1:4.2-3.1 lsb-base_9.20160110+rpi1 make_4.1-6 makedev_2.3.1-93 man-db_2.7.5-1 manpages_4.04-2 mawk_1.3.3-17 mount_2.27.1-6 multiarch-support_2.22-3 nano_2.5.3-2 ncurses-base_6.0+20160213-1 ncurses-bin_6.0+20160213-1 openssl_1.0.2g-1 passwd_1:4.2-3.1 patch_2.7.5-1 perl_5.22.1-9 perl-base_5.22.1-9 perl-modules-5.22_5.22.1-9 pkg-config_0.29-3 po-debconf_1.0.19 procps_2:3.3.11-3 protobuf-compiler_2.6.1-1.3 raspbian-archive-keyring_20120528.2 readline-common_6.3-8 sbuild-build-depends-consul-dummy_0.invalid.0 sbuild-build-depends-core-dummy_0.invalid.0 sed_4.2.2-7.1 sensible-utils_0.0.9 startpar_0.59-3 systemd_229-2 systemd-sysv_229-2 sysv-rc_2.88dsf-59.3 sysvinit-utils_2.88dsf-59.3 tar_1.28-2.1 tzdata_2016a-1 udev_229-2 util-linux_2.27.1-6 xz-utils_5.1.1alpha+20120614-2.1 zlib1g_1:1.2.8.dfsg-2+b1

+------------------------------------------------------------------------------+
| Build                                                                        |
+------------------------------------------------------------------------------+


Unpack source
-------------

gpgv: keyblock resource `/sbuild-nonexistent/.gnupg/trustedkeys.gpg': file open error
gpgv: Signature made Wed Mar 23 05:50:49 2016 UTC using RSA key ID 53968D1B
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./consul_0.6.3~dfsg-2.dsc
dpkg-source: info: extracting consul in consul-0.6.3~dfsg
dpkg-source: info: unpacking consul_0.6.3~dfsg.orig.tar.xz
dpkg-source: info: unpacking consul_0.6.3~dfsg-2.debian.tar.xz
dpkg-source: info: applying 0001-update-test-fixture-paths.patch

Check disc space
----------------

Sufficient free space for build

User Environment
----------------

DEB_BUILD_OPTIONS=parallel=4
HOME=/sbuild-nonexistent
LOGNAME=root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
SCHROOT_ALIAS_NAME=stretch-staging-armhf-sbuild
SCHROOT_CHROOT_NAME=stretch-staging-armhf-sbuild
SCHROOT_COMMAND=env
SCHROOT_GID=111
SCHROOT_GROUP=buildd
SCHROOT_SESSION_ID=stretch-staging-armhf-sbuild-898657f0-1da6-4a1e-b6b0-a19d6f41c511
SCHROOT_UID=106
SCHROOT_USER=buildd
SHELL=/bin/sh
TERM=xterm
USER=buildd

dpkg-buildpackage
-----------------

dpkg-buildpackage: source package consul
dpkg-buildpackage: source version 0.6.3~dfsg-2
dpkg-buildpackage: source distribution unstable
 dpkg-source --before-build consul-0.6.3~dfsg
dpkg-buildpackage: host architecture armhf
 fakeroot debian/rules clean
dh clean --buildsystem=golang --with=golang
   dh_testdir -O--buildsystem=golang
   dh_auto_clean -O--buildsystem=golang
   dh_clean -O--buildsystem=golang
 debian/rules build-arch
dh build-arch --buildsystem=golang --with=golang
   dh_testdir -a -O--buildsystem=golang
   dh_update_autotools_config -a -O--buildsystem=golang
   dh_auto_configure -a -O--buildsystem=golang
   dh_auto_build -a -O--buildsystem=golang
	go install -v github.com/hashicorp/consul github.com/hashicorp/consul/acl github.com/hashicorp/consul/api github.com/hashicorp/consul/command github.com/hashicorp/consul/command/agent github.com/hashicorp/consul/consul github.com/hashicorp/consul/consul/state github.com/hashicorp/consul/consul/structs github.com/hashicorp/consul/testutil github.com/hashicorp/consul/tlsutil github.com/hashicorp/consul/watch
github.com/hashicorp/serf/coordinate
github.com/armon/circbuf
github.com/armon/go-metrics
github.com/hashicorp/go-cleanhttp
github.com/DataDog/datadog-go/statsd
github.com/armon/go-metrics/datadog
github.com/elazarl/go-bindata-assetfs
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts
github.com/hashicorp/consul/api
github.com/Sirupsen/logrus
github.com/docker/go-units
golang.org/x/net/context
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/pools
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/promise
github.com/opencontainers/runc/libcontainer/user
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/stdcopy
github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive
github.com/armon/go-radix
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/homedir
github.com/hashicorp/golang-lru/simplelru
github.com/hashicorp/hcl/hcl/strconv
github.com/hashicorp/golang-lru
github.com/hashicorp/go-msgpack/codec
github.com/hashicorp/hcl/hcl/token
github.com/hashicorp/hcl/hcl/ast
github.com/hashicorp/hcl/hcl/scanner
github.com/hashicorp/hcl/json/token
github.com/hashicorp/hcl/json/scanner
github.com/hashicorp/hcl/hcl/parser
github.com/hashicorp/hcl/json/parser
github.com/hashicorp/go-immutable-radix
github.com/hashicorp/hcl
github.com/hashicorp/go-memdb
github.com/hashicorp/consul/acl
github.com/fsouza/go-dockerclient
github.com/hashicorp/consul/tlsutil
github.com/hashicorp/errwrap
github.com/hashicorp/go-multierror
github.com/boltdb/bolt
github.com/hashicorp/yamux
github.com/hashicorp/consul/consul/structs
github.com/hashicorp/memberlist
github.com/hashicorp/consul/consul/state
github.com/hashicorp/net-rpc-msgpackrpc
github.com/hashicorp/raft
github.com/inconshreveable/muxado/proto/buffer
github.com/inconshreveable/muxado/proto/frame
github.com/hashicorp/serf/serf
github.com/hashicorp/consul/watch
github.com/hashicorp/go-checkpoint
github.com/inconshreveable/muxado/proto
golang.org/x/sys/unix
github.com/inconshreveable/muxado/proto/ext
github.com/inconshreveable/muxado
github.com/hashicorp/raft-boltdb
github.com/hashicorp/go-syslog
github.com/hashicorp/logutils
github.com/hashicorp/scada-client
github.com/hashicorp/consul/consul
github.com/hashicorp/go-reap
github.com/miekg/dns
github.com/bgentry/speakeasy
github.com/mattn/go-isatty
github.com/mitchellh/cli
github.com/mitchellh/mapstructure
github.com/ryanuber/columnize
github.com/hashicorp/consul/testutil
github.com/hashicorp/consul/command/agent
github.com/hashicorp/consul/command
github.com/hashicorp/consul
   debian/rules override_dh_auto_test
make[1]: Entering directory '/<<PKGBUILDDIR>>'
## TODO: patch out tests that rely on the network, e.g. via -test.short
## (which doesn't appear to be used anywhere at the moment, so adding it
## might be amenable as a PR to upstream)
dh_auto_test
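[editor's note: the TODO above proposes guarding network-dependent tests with -test.short. A minimal sketch of that pattern, assuming a hypothetical test name; this is not code from the consul source, only an illustration of the guard the comment suggests.]

	package agent

	import (
		"net"
		"testing"
	)

	// TestDNSLookup_network is a hypothetical example: when the suite is run
	// with "go test -test.short", tests needing the network skip themselves
	// instead of failing inside the offline build chroot.
	func TestDNSLookup_network(t *testing.T) {
		if testing.Short() {
			t.Skip("skipping network-dependent test in -short mode")
		}
		if _, err := net.LookupHost("consul.io"); err != nil {
			t.Fatalf("lookup failed: %v", err)
		}
	}
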
	go test -v github.com/hashicorp/consul github.com/hashicorp/consul/acl github.com/hashicorp/consul/api github.com/hashicorp/consul/command github.com/hashicorp/consul/command/agent github.com/hashicorp/consul/consul github.com/hashicorp/consul/consul/state github.com/hashicorp/consul/consul/structs github.com/hashicorp/consul/testutil github.com/hashicorp/consul/tlsutil github.com/hashicorp/consul/watch
testing: warning: no tests to run
PASS
ok  	github.com/hashicorp/consul	0.070s
=== RUN   TestRootACL
--- PASS: TestRootACL (0.00s)
=== RUN   TestStaticACL
--- PASS: TestStaticACL (0.00s)
=== RUN   TestPolicyACL
--- PASS: TestPolicyACL (0.00s)
=== RUN   TestPolicyACL_Parent
--- PASS: TestPolicyACL_Parent (0.00s)
=== RUN   TestPolicyACL_Keyring
--- PASS: TestPolicyACL_Keyring (0.00s)
=== RUN   TestCache_GetPolicy
--- PASS: TestCache_GetPolicy (0.00s)
=== RUN   TestCache_GetACL
--- PASS: TestCache_GetACL (0.01s)
=== RUN   TestCache_ClearACL
--- PASS: TestCache_ClearACL (0.00s)
=== RUN   TestCache_Purge
--- PASS: TestCache_Purge (0.00s)
=== RUN   TestCache_GetACLPolicy
--- PASS: TestCache_GetACLPolicy (0.00s)
=== RUN   TestCache_GetACL_Parent
--- PASS: TestCache_GetACL_Parent (0.01s)
=== RUN   TestCache_GetACL_ParentCache
--- PASS: TestCache_GetACL_ParentCache (0.00s)
=== RUN   TestParse
--- PASS: TestParse (0.00s)
=== RUN   TestParse_JSON
--- PASS: TestParse_JSON (0.00s)
=== RUN   TestACLPolicy_badPolicy
--- PASS: TestACLPolicy_badPolicy (0.00s)
PASS
ok  	github.com/hashicorp/consul/acl	0.084s
=== RUN   TestACL_CreateDestroy
=== RUN   TestACL_CloneDestroy
=== RUN   TestACL_Info
=== RUN   TestACL_List
=== RUN   TestAgent_Self
=== RUN   TestAgent_Members
=== RUN   TestAgent_Services
=== RUN   TestAgent_Services_CheckPassing
=== RUN   TestAgent_Services_CheckBadStatus
=== RUN   TestAgent_ServiceAddress
=== RUN   TestAgent_Services_MultipleChecks
=== RUN   TestAgent_SetTTLStatus
=== RUN   TestAgent_Checks
=== RUN   TestAgent_CheckStartPassing
=== RUN   TestAgent_Checks_serviceBound
=== RUN   TestAgent_Checks_Docker
=== RUN   TestAgent_Join
=== RUN   TestAgent_ForceLeave
=== RUN   TestServiceMaintenance
=== RUN   TestNodeMaintenance
=== RUN   TestDefaultConfig_env
=== RUN   TestSetQueryOptions
=== RUN   TestSetWriteOptions
=== RUN   TestRequestToHTTP
=== RUN   TestParseQueryMeta
=== RUN   TestAPI_UnixSocket
=== RUN   TestAPI_durToMsec
--- PASS: TestAPI_durToMsec (0.00s)
=== RUN   TestAPI_IsServerError
--- PASS: TestAPI_IsServerError (0.00s)
=== RUN   TestCatalog_Datacenters
=== RUN   TestCatalog_Nodes
=== RUN   TestCatalog_Services
=== RUN   TestCatalog_Service
=== RUN   TestCatalog_Node
=== RUN   TestCatalog_Registration
=== RUN   TestCoordinate_Datacenters
=== RUN   TestCoordinate_Nodes
=== RUN   TestEvent_FireList
=== RUN   TestHealth_Node
=== RUN   TestHealth_Checks
=== RUN   TestHealth_Service
=== RUN   TestHealth_State
=== RUN   TestClientPutGetDelete
=== RUN   TestClient_List_DeleteRecurse
=== RUN   TestClient_DeleteCAS
=== RUN   TestClient_CAS
=== RUN   TestClient_WatchGet
=== RUN   TestClient_WatchList
=== RUN   TestClient_Keys_DeleteRecurse
=== RUN   TestClient_AcquireRelease
=== RUN   TestLock_LockUnlock
=== RUN   TestLock_ForceInvalidate
=== RUN   TestLock_DeleteKey
=== RUN   TestLock_Contend
=== RUN   TestLock_Destroy
=== RUN   TestLock_Conflict
=== RUN   TestLock_ReclaimLock
=== RUN   TestLock_MonitorRetry
=== RUN   TestLock_OneShot
=== RUN   TestPreparedQuery
=== RUN   TestSemaphore_AcquireRelease
=== RUN   TestSemaphore_ForceInvalidate
=== RUN   TestSemaphore_DeleteKey
=== RUN   TestSemaphore_Contend
=== RUN   TestSemaphore_BadLimit
=== RUN   TestSemaphore_Destroy
=== RUN   TestSemaphore_Conflict
=== RUN   TestSemaphore_MonitorRetry
=== RUN   TestSemaphore_OneShot
=== RUN   TestSession_CreateDestroy
=== RUN   TestSession_CreateRenewDestroy
=== RUN   TestSession_CreateRenewDestroyRenew
=== RUN   TestSession_CreateDestroyRenewPeriodic
=== RUN   TestSession_Info
=== RUN   TestSession_Node
=== RUN   TestSession_List
=== RUN   TestStatusLeader
=== RUN   TestStatusPeers
--- SKIP: TestACL_List (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Self (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Members (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Services (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Services_CheckPassing (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestACL_CreateDestroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_ServiceAddress (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Services_MultipleChecks (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_SetTTLStatus (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Checks (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_CheckStartPassing (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Checks_serviceBound (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestACL_CloneDestroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Join (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_ForceLeave (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestServiceMaintenance (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestNodeMaintenance (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- PASS: TestDefaultConfig_env (0.00s)
--- SKIP: TestSetQueryOptions (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSetWriteOptions (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestRequestToHTTP (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- PASS: TestParseQueryMeta (0.00s)
--- SKIP: TestAgent_Checks_Docker (0.01s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestACL_Info (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Nodes (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Services (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Service (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Node (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Registration (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCoordinate_Datacenters (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCoordinate_Nodes (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestEvent_FireList (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestHealth_Node (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAPI_UnixSocket (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Datacenters (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Services_CheckBadStatus (0.02s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_List_DeleteRecurse (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_DeleteCAS (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_CAS (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_WatchGet (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_WatchList (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_Keys_DeleteRecurse (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestHealth_Service (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_LockUnlock (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_ForceInvalidate (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_DeleteKey (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_Contend (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_Destroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_Conflict (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_ReclaimLock (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_MonitorRetry (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_OneShot (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestHealth_Checks (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_AcquireRelease (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_ForceInvalidate (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_DeleteKey (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_Contend (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_BadLimit (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_Destroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_Conflict (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestHealth_State (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_MonitorRetry (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_CreateDestroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_CreateRenewDestroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_CreateRenewDestroyRenew (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_CreateDestroyRenewPeriodic (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_Info (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_Node (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_List (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestStatusLeader (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestStatusPeers (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClientPutGetDelete (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_OneShot (0.01s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_AcquireRelease (0.02s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestPreparedQuery (0.01s)
	server.go:143: consul not found on $PATH, skipping
PASS
ok  	github.com/hashicorp/consul/api	0.105s
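[editor's note: the SKIP lines above all point at server.go:143: the api test harness checks for a consul binary on $PATH before starting a test server. A minimal sketch of that kind of guard, in assumed form; the packaged testutil/server.go is not reproduced here.]

	package testutil

	import (
		"os/exec"
		"testing"
	)

	// skipIfNoConsul mirrors the behaviour seen in the log: if no consul
	// binary is installed in the build chroot, integration tests skip
	// rather than fail.
	func skipIfNoConsul(t *testing.T) {
		if _, err := exec.LookPath("consul"); err != nil {
			t.Skip("consul not found on $PATH, skipping")
		}
	}
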
# github.com/hashicorp/consul/command/agent
src/github.com/hashicorp/consul/command/agent/dns_test.go:1762: undefined: dns.ErrTruncated
=== RUN   TestConfigTestCommand_implements
--- PASS: TestConfigTestCommand_implements (0.00s)
=== RUN   TestConfigTestCommandFailOnEmptyFile
--- PASS: TestConfigTestCommandFailOnEmptyFile (0.00s)
=== RUN   TestConfigTestCommandSucceedOnEmptyDir
--- PASS: TestConfigTestCommandSucceedOnEmptyDir (0.00s)
=== RUN   TestConfigTestCommandSucceedOnMinimalConfigFile
--- PASS: TestConfigTestCommandSucceedOnMinimalConfigFile (0.00s)
=== RUN   TestConfigTestCommandSucceedOnMinimalConfigDir
--- PASS: TestConfigTestCommandSucceedOnMinimalConfigDir (0.00s)
=== RUN   TestEventCommand_implements
--- PASS: TestEventCommand_implements (0.00s)
=== RUN   TestEventCommandRun
2016/03/28 05:53:06 [DEBUG] http: Request GET /v1/agent/self (3.846ms) from=127.0.0.1:60970
2016/03/28 05:53:06 [DEBUG] http: Request PUT /v1/event/fire/cmd (1.265334ms) from=127.0.0.1:60971
2016/03/28 05:53:06 [DEBUG] http: Shutting down http server (127.0.0.1:10411)
--- PASS: TestEventCommandRun (1.20s)
=== RUN   TestExecCommand_implements
--- PASS: TestExecCommand_implements (0.00s)
=== RUN   TestExecCommandRun
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52968
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (358.333µs) from=127.0.0.1:52968
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52969
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (357µs) from=127.0.0.1:52969
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52970
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (328µs) from=127.0.0.1:52970
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52971
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (333.334µs) from=127.0.0.1:52971
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52972
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (306.333µs) from=127.0.0.1:52972
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52973
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (284µs) from=127.0.0.1:52973
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52974
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (318.667µs) from=127.0.0.1:52974
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52975
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (343.333µs) from=127.0.0.1:52975
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52976
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (287.333µs) from=127.0.0.1:52976
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52977
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (286.334µs) from=127.0.0.1:52977
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52978
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (351.334µs) from=127.0.0.1:52978
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52979
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (263.667µs) from=127.0.0.1:52979
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52980
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (278.667µs) from=127.0.0.1:52980
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52981
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (322µs) from=127.0.0.1:52981
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52982
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (275.334µs) from=127.0.0.1:52982
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52983
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (294µs) from=127.0.0.1:52983
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52984
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (281.334µs) from=127.0.0.1:52984
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52985
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (300µs) from=127.0.0.1:52985
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52986
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (280.333µs) from=127.0.0.1:52986
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52987
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (340.334µs) from=127.0.0.1:52987
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52988
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (290.334µs) from=127.0.0.1:52988
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52989
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (284.333µs) from=127.0.0.1:52989
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52990
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (281.333µs) from=127.0.0.1:52990
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52991
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (329µs) from=127.0.0.1:52991
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52992
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (281.667µs) from=127.0.0.1:52992
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52993
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (283.333µs) from=127.0.0.1:52993
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52994
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (341µs) from=127.0.0.1:52994
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52995
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (294µs) from=127.0.0.1:52995
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52996
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (293.334µs) from=127.0.0.1:52996
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52997
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (350µs) from=127.0.0.1:52997
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52998
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (279µs) from=127.0.0.1:52998
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52999
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (284µs) from=127.0.0.1:52999
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53000
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (292.333µs) from=127.0.0.1:53000
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53001
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:53001
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53002
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (344µs) from=127.0.0.1:53002
2016/03/28 05:53:07 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53003
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (275.334µs) from=127.0.0.1:53003
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (470.667µs) from=127.0.0.1:53004
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (249.334µs) from=127.0.0.1:53005
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (265.333µs) from=127.0.0.1:53006
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (255µs) from=127.0.0.1:53007
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (307µs) from=127.0.0.1:53008
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (250.666µs) from=127.0.0.1:53009
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (330.667µs) from=127.0.0.1:53010
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (330.666µs) from=127.0.0.1:53011
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (254.666µs) from=127.0.0.1:53012
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (273µs) from=127.0.0.1:53013
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (263µs) from=127.0.0.1:53014
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (274µs) from=127.0.0.1:53015
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (315µs) from=127.0.0.1:53016
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (279.667µs) from=127.0.0.1:53017
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (263µs) from=127.0.0.1:53018
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:53019
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (277.666µs) from=127.0.0.1:53020
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (307.333µs) from=127.0.0.1:53021
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (266.667µs) from=127.0.0.1:53022
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (320.333µs) from=127.0.0.1:53023
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (255µs) from=127.0.0.1:53024
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (259.334µs) from=127.0.0.1:53025
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (266.667µs) from=127.0.0.1:53026
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (298µs) from=127.0.0.1:53027
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (264.334µs) from=127.0.0.1:53028
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (249µs) from=127.0.0.1:53029
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (263µs) from=127.0.0.1:53030
2016/03/28 05:53:07 [DEBUG] http: Request GET /v1/catalog/nodes (253.667µs) from=127.0.0.1:53031
2016/03/28 05:53:08 [DEBUG] http: Request GET /v1/catalog/nodes (308.667µs) from=127.0.0.1:53032
2016/03/28 05:53:08 [DEBUG] http: Request GET /v1/catalog/nodes (277.333µs) from=127.0.0.1:53033
2016/03/28 05:53:08 [DEBUG] http: Request GET /v1/catalog/nodes (307µs) from=127.0.0.1:53034
2016/03/28 05:53:08 [DEBUG] http: Request GET /v1/catalog/nodes (261µs) from=127.0.0.1:53035
2016/03/28 05:53:08 [DEBUG] http: Request GET /v1/catalog/nodes (263µs) from=127.0.0.1:53036
2016/03/28 05:53:08 [DEBUG] http: Request GET /v1/catalog/nodes (262.334µs) from=127.0.0.1:53037
2016/03/28 05:53:08 [DEBUG] http: Request GET /v1/catalog/nodes (350.334µs) from=127.0.0.1:53038
2016/03/28 05:53:08 [DEBUG] http: Request GET /v1/agent/self (668.333µs) from=127.0.0.1:53039
2016/03/28 05:53:08 [DEBUG] http: Request PUT /v1/session/create (199.041333ms) from=127.0.0.1:53040
2016/03/28 05:53:08 [DEBUG] http: Request PUT /v1/kv/_rexec/b81c8d87-0160-1157-3cff-72e650cea7cc/job?acquire=b81c8d87-0160-1157-3cff-72e650cea7cc (228.667ms) from=127.0.0.1:53041
2016/03/28 05:53:08 [DEBUG] http: Request PUT /v1/event/fire/_rexec (828µs) from=127.0.0.1:53042
2016/03/28 05:53:08 [DEBUG] http: Request GET /v1/kv/_rexec/b81c8d87-0160-1157-3cff-72e650cea7cc/?keys=&wait=400ms (687.667µs) from=127.0.0.1:53043
2016/03/28 05:53:08 [DEBUG] http: Request GET /v1/kv/_rexec/b81c8d87-0160-1157-3cff-72e650cea7cc/?index=5&keys=&wait=400ms (187.066ms) from=127.0.0.1:53044
2016/03/28 05:53:09 [DEBUG] http: Request GET /v1/kv/_rexec/b81c8d87-0160-1157-3cff-72e650cea7cc/?index=6&keys=&wait=400ms (189.721ms) from=127.0.0.1:53045
2016/03/28 05:53:09 [DEBUG] http: Request GET /v1/kv/_rexec/b81c8d87-0160-1157-3cff-72e650cea7cc/Node%202/out/00000 (864.333µs) from=127.0.0.1:53046
2016/03/28 05:53:09 [DEBUG] http: Request GET /v1/kv/_rexec/b81c8d87-0160-1157-3cff-72e650cea7cc/?index=7&keys=&wait=400ms (135.279ms) from=127.0.0.1:53047
2016/03/28 05:53:09 [DEBUG] http: Request GET /v1/kv/_rexec/b81c8d87-0160-1157-3cff-72e650cea7cc/Node%202/exit (714.666µs) from=127.0.0.1:53048
2016/03/28 05:53:09 [DEBUG] http: Request GET /v1/kv/_rexec/b81c8d87-0160-1157-3cff-72e650cea7cc/?index=8&keys=&wait=400ms (412.901333ms) from=127.0.0.1:53049
2016/03/28 05:53:09 [DEBUG] http: Request PUT /v1/session/destroy/b81c8d87-0160-1157-3cff-72e650cea7cc (246.189334ms) from=127.0.0.1:53050
2016/03/28 05:53:10 [DEBUG] http: Request DELETE /v1/kv/_rexec/b81c8d87-0160-1157-3cff-72e650cea7cc?recurse= (173.15ms) from=127.0.0.1:53051
2016/03/28 05:53:10 [DEBUG] http: Request PUT /v1/session/destroy/b81c8d87-0160-1157-3cff-72e650cea7cc (164.255ms) from=127.0.0.1:53052
2016/03/28 05:53:10 [DEBUG] http: Shutting down http server (127.0.0.1:10421)
--- PASS: TestExecCommandRun (3.71s)
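[editor's note: the request trace for TestExecCommandRun shows the _rexec flow: create a session, acquire the job key, fire an event, then poll the key prefix with blocking queries (?index=N&keys=&wait=400ms) until output and exit keys appear. Below is a minimal sketch of that blocking-query pattern using the packaged github.com/hashicorp/consul/api client; the prefix, timings, and loop shape are illustrative assumptions, not the agent's actual code.]

	package main

	import (
		"fmt"
		"time"

		"github.com/hashicorp/consul/api"
	)

	func main() {
		client, err := api.NewClient(api.DefaultConfig())
		if err != nil {
			panic(err)
		}
		kv := client.KV()
		var index uint64
		for {
			// Blocking "keys" query: returns once the index moves past the
			// last one seen, or after the wait time elapses (cf. the
			// "?index=5&keys=&wait=400ms" requests in the log above).
			keys, meta, err := kv.Keys("_rexec/example-session/", "", &api.QueryOptions{
				WaitIndex: index,
				WaitTime:  400 * time.Millisecond,
			})
			if err != nil {
				panic(err)
			}
			index = meta.LastIndex
			for _, k := range keys {
				fmt.Println(k)
			}
		}
	}
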
=== RUN   TestExecCommandRun_CrossDC
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (260µs) from=127.0.0.1:59913
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (334.333µs) from=127.0.0.1:59914
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (281.666µs) from=127.0.0.1:59915
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (259µs) from=127.0.0.1:59916
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (280.333µs) from=127.0.0.1:59917
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (282µs) from=127.0.0.1:59918
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (285µs) from=127.0.0.1:59919
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:59920
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (379.666µs) from=127.0.0.1:59921
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (273.667µs) from=127.0.0.1:59922
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (316µs) from=127.0.0.1:59923
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (286.667µs) from=127.0.0.1:59924
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (325µs) from=127.0.0.1:59925
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (299.334µs) from=127.0.0.1:59926
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (299µs) from=127.0.0.1:59927
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (287.333µs) from=127.0.0.1:59928
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (412.333µs) from=127.0.0.1:59929
2016/03/28 05:53:11 [DEBUG] http: Request GET /v1/catalog/nodes (488.333µs) from=127.0.0.1:59930
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (281.666µs) from=127.0.0.1:59931
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (264.667µs) from=127.0.0.1:59932
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (270µs) from=127.0.0.1:59933
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (317.334µs) from=127.0.0.1:59934
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (265.667µs) from=127.0.0.1:59935
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (338.334µs) from=127.0.0.1:59936
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (513.666µs) from=127.0.0.1:59937
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (273.334µs) from=127.0.0.1:59938
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (268.667µs) from=127.0.0.1:59939
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (473µs) from=127.0.0.1:59940
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (268.333µs) from=127.0.0.1:59941
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (1.160333ms) from=127.0.0.1:59942
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (297.333µs) from=127.0.0.1:59943
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:59944
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (285.667µs) from=127.0.0.1:59945
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (323.333µs) from=127.0.0.1:59946
2016/03/28 05:53:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46649
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (335µs) from=127.0.0.1:46649
2016/03/28 05:53:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46650
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (319.334µs) from=127.0.0.1:46650
2016/03/28 05:53:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46651
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (318.667µs) from=127.0.0.1:46651
2016/03/28 05:53:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46652
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (336.333µs) from=127.0.0.1:46652
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (253.667µs) from=127.0.0.1:46653
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (258.333µs) from=127.0.0.1:46654
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (354µs) from=127.0.0.1:46655
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (332µs) from=127.0.0.1:46656
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (278.667µs) from=127.0.0.1:46657
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (312µs) from=127.0.0.1:46658
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:46659
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (259.333µs) from=127.0.0.1:46660
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (270µs) from=127.0.0.1:46661
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (406.667µs) from=127.0.0.1:46662
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (260.333µs) from=127.0.0.1:46663
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (268.333µs) from=127.0.0.1:46664
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (268µs) from=127.0.0.1:46665
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (275µs) from=127.0.0.1:46666
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (255.334µs) from=127.0.0.1:46667
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (265.667µs) from=127.0.0.1:46668
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (257.667µs) from=127.0.0.1:46669
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (266µs) from=127.0.0.1:46670
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (272.667µs) from=127.0.0.1:46671
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (282.667µs) from=127.0.0.1:46672
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (261.667µs) from=127.0.0.1:46673
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (262.666µs) from=127.0.0.1:46674
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (261.334µs) from=127.0.0.1:46675
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (263.333µs) from=127.0.0.1:46676
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (303µs) from=127.0.0.1:46677
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (266.667µs) from=127.0.0.1:46678
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (260µs) from=127.0.0.1:46679
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:46680
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (332µs) from=127.0.0.1:46681
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (262.667µs) from=127.0.0.1:46682
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (269.333µs) from=127.0.0.1:46683
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (274.666µs) from=127.0.0.1:46684
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (272.333µs) from=127.0.0.1:46685
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (264.667µs) from=127.0.0.1:46686
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (271.333µs) from=127.0.0.1:46687
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (297.333µs) from=127.0.0.1:46688
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (278.667µs) from=127.0.0.1:46689
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (259µs) from=127.0.0.1:46690
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (267.666µs) from=127.0.0.1:46691
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (253µs) from=127.0.0.1:46692
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (260.667µs) from=127.0.0.1:46693
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (279.333µs) from=127.0.0.1:46694
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (275.666µs) from=127.0.0.1:46695
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (260.667µs) from=127.0.0.1:46696
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (251.667µs) from=127.0.0.1:46697
2016/03/28 05:53:12 [DEBUG] http: Request GET /v1/catalog/nodes (249.666µs) from=127.0.0.1:46698
2016/03/28 05:53:13 [DEBUG] http: Request GET /v1/catalog/nodes (280.667µs) from=127.0.0.1:46699
2016/03/28 05:53:13 [DEBUG] http: Request GET /v1/agent/self?dc=dc2 (658.333µs) from=127.0.0.1:59998
2016/03/28 05:53:13 [DEBUG] http: Request GET /v1/health/service/consul?dc=dc2&passing=1 (4.756667ms) from=127.0.0.1:59999
2016/03/28 05:53:13 [DEBUG] http: Request PUT /v1/session/create?dc=dc2 (213.962333ms) from=127.0.0.1:60001
2016/03/28 05:53:13 [DEBUG] http: Request PUT /v1/kv/_rexec/0738064b-9bbc-292c-4864-548eac3624ce/job?acquire=0738064b-9bbc-292c-4864-548eac3624ce&dc=dc2 (463.692334ms) from=127.0.0.1:60003
2016/03/28 05:53:13 [DEBUG] http: Request PUT /v1/event/fire/_rexec?dc=dc2 (3.845333ms) from=127.0.0.1:60004
2016/03/28 05:53:13 [DEBUG] http: Request GET /v1/kv/_rexec/0738064b-9bbc-292c-4864-548eac3624ce/?dc=dc2&keys=&wait=400ms (3.058667ms) from=127.0.0.1:60005
2016/03/28 05:53:14 [DEBUG] http: Request GET /v1/kv/_rexec/0738064b-9bbc-292c-4864-548eac3624ce/?dc=dc2&index=5&keys=&wait=400ms (274.642667ms) from=127.0.0.1:60006
2016/03/28 05:53:14 [DEBUG] http: Request GET /v1/kv/_rexec/0738064b-9bbc-292c-4864-548eac3624ce/?dc=dc2&index=6&keys=&wait=400ms (291.913667ms) from=127.0.0.1:60007
2016/03/28 05:53:14 [DEBUG] http: Request GET /v1/kv/_rexec/0738064b-9bbc-292c-4864-548eac3624ce/Node%204/out/00000?dc=dc2 (3.691ms) from=127.0.0.1:60008
2016/03/28 05:53:14 [DEBUG] http: Request GET /v1/kv/_rexec/0738064b-9bbc-292c-4864-548eac3624ce/?dc=dc2&index=7&keys=&wait=400ms (146.412667ms) from=127.0.0.1:60009
2016/03/28 05:53:14 [DEBUG] http: Request GET /v1/kv/_rexec/0738064b-9bbc-292c-4864-548eac3624ce/Node%204/exit?dc=dc2 (3.004666ms) from=127.0.0.1:60010
2016/03/28 05:53:15 [DEBUG] http: Request GET /v1/kv/_rexec/0738064b-9bbc-292c-4864-548eac3624ce/?dc=dc2&index=8&keys=&wait=400ms (425.797ms) from=127.0.0.1:60011
2016/03/28 05:53:15 [DEBUG] http: Request PUT /v1/session/destroy/0738064b-9bbc-292c-4864-548eac3624ce?dc=dc2 (277.183667ms) from=127.0.0.1:60012
2016/03/28 05:53:15 [DEBUG] http: Request DELETE /v1/kv/_rexec/0738064b-9bbc-292c-4864-548eac3624ce?dc=dc2&recurse= (307.401333ms) from=127.0.0.1:60013
2016/03/28 05:53:15 [DEBUG] http: Request PUT /v1/session/destroy/0738064b-9bbc-292c-4864-548eac3624ce?dc=dc2 (227.616333ms) from=127.0.0.1:60014
2016/03/28 05:53:16 [DEBUG] http: Shutting down http server (127.0.0.1:10441)
2016/03/28 05:53:16 [DEBUG] http: Shutting down http server (127.0.0.1:10431)
--- PASS: TestExecCommandRun_CrossDC (5.91s)
=== RUN   TestExecCommand_Validate
--- PASS: TestExecCommand_Validate (0.00s)
=== RUN   TestExecCommand_Sessions
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34566
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (311.333µs) from=127.0.0.1:34566
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34567
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (294µs) from=127.0.0.1:34567
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34568
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:34568
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34569
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (294µs) from=127.0.0.1:34569
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34570
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (316µs) from=127.0.0.1:34570
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34571
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (355.667µs) from=127.0.0.1:34571
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34572
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (272.667µs) from=127.0.0.1:34572
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34573
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (275µs) from=127.0.0.1:34573
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34574
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (339.666µs) from=127.0.0.1:34574
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34575
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (267.667µs) from=127.0.0.1:34575
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34576
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (261µs) from=127.0.0.1:34576
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34577
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (315.334µs) from=127.0.0.1:34577
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34578
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (272.333µs) from=127.0.0.1:34578
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34579
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (274µs) from=127.0.0.1:34579
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34580
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:34580
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34581
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (515.333µs) from=127.0.0.1:34581
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34582
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (399.667µs) from=127.0.0.1:34582
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34583
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (422.666µs) from=127.0.0.1:34583
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34584
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (373µs) from=127.0.0.1:34584
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34585
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (274.666µs) from=127.0.0.1:34585
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34586
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (276.333µs) from=127.0.0.1:34586
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34587
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (407.333µs) from=127.0.0.1:34587
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34588
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (274.333µs) from=127.0.0.1:34588
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34589
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (285µs) from=127.0.0.1:34589
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34590
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (345µs) from=127.0.0.1:34590
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34591
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (293.333µs) from=127.0.0.1:34591
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34592
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (264.666µs) from=127.0.0.1:34592
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34593
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (332.333µs) from=127.0.0.1:34593
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34594
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (283.333µs) from=127.0.0.1:34594
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34595
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (264.333µs) from=127.0.0.1:34595
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34596
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (267.666µs) from=127.0.0.1:34596
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34597
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (284.333µs) from=127.0.0.1:34597
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34598
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (284.666µs) from=127.0.0.1:34598
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34599
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (271.333µs) from=127.0.0.1:34599
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34600
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:34600
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34601
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (292.334µs) from=127.0.0.1:34601
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34602
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (268.333µs) from=127.0.0.1:34602
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34603
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (304.666µs) from=127.0.0.1:34603
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34604
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (298.667µs) from=127.0.0.1:34604
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34605
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (281.667µs) from=127.0.0.1:34605
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34606
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (333.333µs) from=127.0.0.1:34606
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34607
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (323.667µs) from=127.0.0.1:34607
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34608
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (353.334µs) from=127.0.0.1:34608
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34609
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (374.667µs) from=127.0.0.1:34609
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34610
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (623.666µs) from=127.0.0.1:34610
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34611
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (341.333µs) from=127.0.0.1:34611
2016/03/28 05:53:17 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34612
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (287.334µs) from=127.0.0.1:34612
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (312µs) from=127.0.0.1:34613
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (255µs) from=127.0.0.1:34614
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (278.667µs) from=127.0.0.1:34615
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (267µs) from=127.0.0.1:34616
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (253µs) from=127.0.0.1:34617
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (255.667µs) from=127.0.0.1:34618
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (254.667µs) from=127.0.0.1:34619
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (287.667µs) from=127.0.0.1:34620
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (271.333µs) from=127.0.0.1:34621
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (487µs) from=127.0.0.1:34622
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (273µs) from=127.0.0.1:34623
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (280.667µs) from=127.0.0.1:34624
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (309.666µs) from=127.0.0.1:34625
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (273µs) from=127.0.0.1:34626
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (264µs) from=127.0.0.1:34627
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (265µs) from=127.0.0.1:34628
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (256µs) from=127.0.0.1:34629
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (295.667µs) from=127.0.0.1:34630
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (265.666µs) from=127.0.0.1:34631
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (260µs) from=127.0.0.1:34632
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (424µs) from=127.0.0.1:34633
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (1.014333ms) from=127.0.0.1:34634
2016/03/28 05:53:17 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:34635
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (300µs) from=127.0.0.1:34636
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (278.334µs) from=127.0.0.1:34637
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (280.333µs) from=127.0.0.1:34638
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (318.333µs) from=127.0.0.1:34639
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (299µs) from=127.0.0.1:34640
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (269.667µs) from=127.0.0.1:34641
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (285.333µs) from=127.0.0.1:34642
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (269.333µs) from=127.0.0.1:34643
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (279.333µs) from=127.0.0.1:34644
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (273.666µs) from=127.0.0.1:34645
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (268.667µs) from=127.0.0.1:34646
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (259.334µs) from=127.0.0.1:34647
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (384µs) from=127.0.0.1:34648
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (289.667µs) from=127.0.0.1:34649
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (276µs) from=127.0.0.1:34650
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (314.667µs) from=127.0.0.1:34651
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:34652
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (260.666µs) from=127.0.0.1:34653
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (239.666µs) from=127.0.0.1:34654
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (315µs) from=127.0.0.1:34655
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (270.667µs) from=127.0.0.1:34656
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (245µs) from=127.0.0.1:34657
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (244.333µs) from=127.0.0.1:34658
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (245.666µs) from=127.0.0.1:34659
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (245µs) from=127.0.0.1:34660
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/catalog/nodes (276µs) from=127.0.0.1:34661
2016/03/28 05:53:18 [DEBUG] http: Request PUT /v1/session/create (194.673001ms) from=127.0.0.1:34662
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/session/info/99c894de-4dfc-6759-fd3d-77e379a3b710 (746.333µs) from=127.0.0.1:34663
2016/03/28 05:53:18 [DEBUG] http: Request PUT /v1/session/destroy/99c894de-4dfc-6759-fd3d-77e379a3b710 (149.384333ms) from=127.0.0.1:34664
2016/03/28 05:53:18 [DEBUG] http: Request GET /v1/session/info/99c894de-4dfc-6759-fd3d-77e379a3b710 (260.333µs) from=127.0.0.1:34665
2016/03/28 05:53:18 [DEBUG] http: Shutting down http server (127.0.0.1:10451)
--- PASS: TestExecCommand_Sessions (2.53s)
=== RUN   TestExecCommand_Sessions_Foreign
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36483
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (306.333µs) from=127.0.0.1:36483
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36484
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (259.666µs) from=127.0.0.1:36484
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36485
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (282.667µs) from=127.0.0.1:36485
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36486
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (290µs) from=127.0.0.1:36486
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36487
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (269.333µs) from=127.0.0.1:36487
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36488
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (282µs) from=127.0.0.1:36488
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36489
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (281.334µs) from=127.0.0.1:36489
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36490
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (336.666µs) from=127.0.0.1:36490
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36491
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (293.667µs) from=127.0.0.1:36491
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36492
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (279.667µs) from=127.0.0.1:36492
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36493
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (720.667µs) from=127.0.0.1:36493
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36494
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (601.667µs) from=127.0.0.1:36494
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36495
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (729.667µs) from=127.0.0.1:36495
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36496
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (636.333µs) from=127.0.0.1:36496
2016/03/28 05:53:19 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36497
2016/03/28 05:53:19 [DEBUG] http: Request GET /v1/catalog/nodes (301µs) from=127.0.0.1:36497
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36498
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (366µs) from=127.0.0.1:36498
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36499
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (289.667µs) from=127.0.0.1:36499
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36500
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (726µs) from=127.0.0.1:36500
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36501
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (724.666µs) from=127.0.0.1:36501
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36502
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (282.667µs) from=127.0.0.1:36502
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36503
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (472.667µs) from=127.0.0.1:36503
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36504
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (317µs) from=127.0.0.1:36504
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36505
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (413.667µs) from=127.0.0.1:36505
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36506
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (467.667µs) from=127.0.0.1:36506
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36507
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (612.333µs) from=127.0.0.1:36507
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36508
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (579.666µs) from=127.0.0.1:36508
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36509
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (298.333µs) from=127.0.0.1:36509
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36510
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (1.536667ms) from=127.0.0.1:36510
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36511
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (477µs) from=127.0.0.1:36511
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36512
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (282.666µs) from=127.0.0.1:36512
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36513
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (308.334µs) from=127.0.0.1:36513
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36514
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (273.334µs) from=127.0.0.1:36514
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36515
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (322µs) from=127.0.0.1:36515
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36516
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (300.333µs) from=127.0.0.1:36516
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36517
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (268µs) from=127.0.0.1:36517
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36518
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (332.333µs) from=127.0.0.1:36518
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36519
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (310.333µs) from=127.0.0.1:36519
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36520
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (277.667µs) from=127.0.0.1:36520
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36521
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (273.667µs) from=127.0.0.1:36521
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36522
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (410.667µs) from=127.0.0.1:36522
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36523
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (308.334µs) from=127.0.0.1:36523
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36524
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (319.333µs) from=127.0.0.1:36524
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36525
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (276.333µs) from=127.0.0.1:36525
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36526
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (380.666µs) from=127.0.0.1:36526
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36527
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (442.334µs) from=127.0.0.1:36527
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36528
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (455.667µs) from=127.0.0.1:36528
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36529
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (395.333µs) from=127.0.0.1:36529
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36530
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (486.667µs) from=127.0.0.1:36530
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36531
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (270µs) from=127.0.0.1:36531
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36532
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (270.333µs) from=127.0.0.1:36532
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36533
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:36533
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36534
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (279µs) from=127.0.0.1:36534
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36535
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (279µs) from=127.0.0.1:36535
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36536
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (300.667µs) from=127.0.0.1:36536
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36537
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (288.334µs) from=127.0.0.1:36537
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36538
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:36538
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36539
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (286.667µs) from=127.0.0.1:36539
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36540
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (323.333µs) from=127.0.0.1:36540
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36541
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (279.333µs) from=127.0.0.1:36541
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36542
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (294.333µs) from=127.0.0.1:36542
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36543
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (271.666µs) from=127.0.0.1:36543
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36544
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (287.667µs) from=127.0.0.1:36544
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36545
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (289.666µs) from=127.0.0.1:36545
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36546
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (275.333µs) from=127.0.0.1:36546
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36547
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (286.667µs) from=127.0.0.1:36547
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36548
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (284.666µs) from=127.0.0.1:36548
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36549
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (314.666µs) from=127.0.0.1:36549
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36550
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (476.666µs) from=127.0.0.1:36550
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36551
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:36551
2016/03/28 05:53:20 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36552
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (343µs) from=127.0.0.1:36552
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (319µs) from=127.0.0.1:36553
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (265.667µs) from=127.0.0.1:36554
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (415µs) from=127.0.0.1:36555
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (303.666µs) from=127.0.0.1:36556
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (428µs) from=127.0.0.1:36557
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (481µs) from=127.0.0.1:36558
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (289.333µs) from=127.0.0.1:36559
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (481.667µs) from=127.0.0.1:36560
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (523µs) from=127.0.0.1:36561
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (592.333µs) from=127.0.0.1:36562
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (492.666µs) from=127.0.0.1:36563
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (483µs) from=127.0.0.1:36564
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (478µs) from=127.0.0.1:36565
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (268µs) from=127.0.0.1:36566
2016/03/28 05:53:20 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:36567
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (441.333µs) from=127.0.0.1:36568
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (295µs) from=127.0.0.1:36569
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (275µs) from=127.0.0.1:36570
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (278.333µs) from=127.0.0.1:36571
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (258µs) from=127.0.0.1:36572
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (282.666µs) from=127.0.0.1:36573
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (283µs) from=127.0.0.1:36574
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (276µs) from=127.0.0.1:36575
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (273.334µs) from=127.0.0.1:36576
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (286.666µs) from=127.0.0.1:36577
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (280µs) from=127.0.0.1:36578
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (268.667µs) from=127.0.0.1:36579
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (259.667µs) from=127.0.0.1:36580
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (267.333µs) from=127.0.0.1:36581
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (269.333µs) from=127.0.0.1:36582
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (279.666µs) from=127.0.0.1:36583
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (270µs) from=127.0.0.1:36584
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (312.666µs) from=127.0.0.1:36585
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (266.333µs) from=127.0.0.1:36586
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (266µs) from=127.0.0.1:36587
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (276µs) from=127.0.0.1:36588
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (298µs) from=127.0.0.1:36589
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (266.667µs) from=127.0.0.1:36590
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (245.667µs) from=127.0.0.1:36591
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (268.667µs) from=127.0.0.1:36592
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:36593
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (295µs) from=127.0.0.1:36594
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (264.667µs) from=127.0.0.1:36595
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (258.333µs) from=127.0.0.1:36596
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (263.333µs) from=127.0.0.1:36597
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (263.667µs) from=127.0.0.1:36598
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (295.666µs) from=127.0.0.1:36599
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (259µs) from=127.0.0.1:36600
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (300.334µs) from=127.0.0.1:36601
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/catalog/nodes (320µs) from=127.0.0.1:36603
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/health/service/consul?passing=1 (837.666µs) from=127.0.0.1:36604
2016/03/28 05:53:21 [DEBUG] http: Request PUT /v1/session/create (240.435333ms) from=127.0.0.1:36605
2016/03/28 05:53:21 [DEBUG] http: Request GET /v1/session/info/2775491d-208f-b8a3-d99c-21d78acfc2bd (337.667µs) from=127.0.0.1:36606
2016/03/28 05:53:22 [DEBUG] http: Request PUT /v1/session/destroy/2775491d-208f-b8a3-d99c-21d78acfc2bd (259.15ms) from=127.0.0.1:36607
2016/03/28 05:53:22 [DEBUG] http: Request GET /v1/session/info/2775491d-208f-b8a3-d99c-21d78acfc2bd (275.667µs) from=127.0.0.1:36608
2016/03/28 05:53:22 [DEBUG] http: Shutting down http server (127.0.0.1:10461)
--- PASS: TestExecCommand_Sessions_Foreign (3.44s)
=== RUN   TestExecCommand_UploadDestroy
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53712
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (710.333µs) from=127.0.0.1:53712
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53713
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (295.333µs) from=127.0.0.1:53713
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53714
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (330µs) from=127.0.0.1:53714
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53715
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (529µs) from=127.0.0.1:53715
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53716
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (459µs) from=127.0.0.1:53716
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53717
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (270µs) from=127.0.0.1:53717
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53718
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (275µs) from=127.0.0.1:53718
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53719
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (284.334µs) from=127.0.0.1:53719
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53720
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (276.334µs) from=127.0.0.1:53720
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53721
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (281µs) from=127.0.0.1:53721
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53722
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (281.334µs) from=127.0.0.1:53722
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53723
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (291.667µs) from=127.0.0.1:53723
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53724
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (295µs) from=127.0.0.1:53724
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53725
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (818µs) from=127.0.0.1:53725
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53726
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (284.666µs) from=127.0.0.1:53726
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53727
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (276.333µs) from=127.0.0.1:53727
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53728
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (298.333µs) from=127.0.0.1:53728
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53729
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (315.667µs) from=127.0.0.1:53729
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53730
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (307.334µs) from=127.0.0.1:53730
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53731
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (290µs) from=127.0.0.1:53731
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53732
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (293µs) from=127.0.0.1:53732
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53733
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (291.334µs) from=127.0.0.1:53733
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53734
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (339.333µs) from=127.0.0.1:53734
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53735
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (275.333µs) from=127.0.0.1:53735
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53736
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (285.333µs) from=127.0.0.1:53736
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53737
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (281.667µs) from=127.0.0.1:53737
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53738
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (279.333µs) from=127.0.0.1:53738
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53739
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (283µs) from=127.0.0.1:53739
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53740
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (477.667µs) from=127.0.0.1:53740
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53741
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (269.334µs) from=127.0.0.1:53741
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53742
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:53742
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53743
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (352.667µs) from=127.0.0.1:53743
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53744
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (358.667µs) from=127.0.0.1:53744
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53745
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (314µs) from=127.0.0.1:53745
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53746
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (283.333µs) from=127.0.0.1:53746
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53747
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (266.334µs) from=127.0.0.1:53747
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53748
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:53748
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53749
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (322.333µs) from=127.0.0.1:53749
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53750
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (265.334µs) from=127.0.0.1:53750
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53751
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (265.667µs) from=127.0.0.1:53751
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53752
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (293.334µs) from=127.0.0.1:53752
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53753
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (780.333µs) from=127.0.0.1:53753
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53754
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (314µs) from=127.0.0.1:53754
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53755
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (589.667µs) from=127.0.0.1:53755
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53756
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (383µs) from=127.0.0.1:53756
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53757
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (432µs) from=127.0.0.1:53757
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53758
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (484.333µs) from=127.0.0.1:53758
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53759
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (363.666µs) from=127.0.0.1:53759
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53760
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (279.333µs) from=127.0.0.1:53760
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53761
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:53761
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53762
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (284.333µs) from=127.0.0.1:53762
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53763
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (293.333µs) from=127.0.0.1:53763
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53764
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (280.666µs) from=127.0.0.1:53764
2016/03/28 05:53:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53765
2016/03/28 05:53:23 [DEBUG] http: Request GET /v1/catalog/nodes (283.667µs) from=127.0.0.1:53765
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53766
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (280µs) from=127.0.0.1:53766
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53767
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (287.667µs) from=127.0.0.1:53767
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53768
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (270.333µs) from=127.0.0.1:53768
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53769
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (307.333µs) from=127.0.0.1:53769
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53770
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (263.667µs) from=127.0.0.1:53770
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53771
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (284.667µs) from=127.0.0.1:53771
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53772
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (272.666µs) from=127.0.0.1:53772
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53773
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (276.666µs) from=127.0.0.1:53773
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53774
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:53774
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53775
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (275.334µs) from=127.0.0.1:53775
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53776
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (267.333µs) from=127.0.0.1:53776
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53777
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (271.666µs) from=127.0.0.1:53777
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53778
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (269.667µs) from=127.0.0.1:53778
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53779
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (303µs) from=127.0.0.1:53779
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53780
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:53780
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53781
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (327.667µs) from=127.0.0.1:53781
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53782
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (276.667µs) from=127.0.0.1:53782
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53783
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (285.333µs) from=127.0.0.1:53783
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53784
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (309.667µs) from=127.0.0.1:53784
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53785
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (336.667µs) from=127.0.0.1:53785
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53786
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (270.334µs) from=127.0.0.1:53786
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53787
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (270.667µs) from=127.0.0.1:53787
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53788
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (275µs) from=127.0.0.1:53788
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53789
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (273.334µs) from=127.0.0.1:53789
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53790
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (281.334µs) from=127.0.0.1:53790
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53791
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (345µs) from=127.0.0.1:53791
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53792
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (279.667µs) from=127.0.0.1:53792
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53793
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (283µs) from=127.0.0.1:53793
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53794
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (275.334µs) from=127.0.0.1:53794
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53795
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (336.333µs) from=127.0.0.1:53795
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53796
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (280.666µs) from=127.0.0.1:53796
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53797
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (278.333µs) from=127.0.0.1:53797
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53798
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (312.334µs) from=127.0.0.1:53798
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53799
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (335.333µs) from=127.0.0.1:53799
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53800
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (282.667µs) from=127.0.0.1:53800
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53801
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (283µs) from=127.0.0.1:53801
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53802
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (279.667µs) from=127.0.0.1:53802
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53803
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (288.333µs) from=127.0.0.1:53803
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53804
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (302.333µs) from=127.0.0.1:53804
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53805
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (272.667µs) from=127.0.0.1:53805
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53806
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (292.334µs) from=127.0.0.1:53806
2016/03/28 05:53:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53807
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (335µs) from=127.0.0.1:53807
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (262µs) from=127.0.0.1:53808
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (266.667µs) from=127.0.0.1:53809
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (260.333µs) from=127.0.0.1:53810
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (266.666µs) from=127.0.0.1:53811
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (266.666µs) from=127.0.0.1:53812
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (458µs) from=127.0.0.1:53813
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (441.333µs) from=127.0.0.1:53814
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (391.333µs) from=127.0.0.1:53815
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (279µs) from=127.0.0.1:53816
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (264.666µs) from=127.0.0.1:53817
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (900.334µs) from=127.0.0.1:53818
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (260.667µs) from=127.0.0.1:53819
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (254.667µs) from=127.0.0.1:53820
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (245.333µs) from=127.0.0.1:53821
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (273.667µs) from=127.0.0.1:53822
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (267.667µs) from=127.0.0.1:53823
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (253µs) from=127.0.0.1:53824
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (330.333µs) from=127.0.0.1:53825
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (323.667µs) from=127.0.0.1:53826
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (346.333µs) from=127.0.0.1:53827
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (315.333µs) from=127.0.0.1:53828
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (351.333µs) from=127.0.0.1:53829
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (301µs) from=127.0.0.1:53830
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (295µs) from=127.0.0.1:53831
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (325.667µs) from=127.0.0.1:53832
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (261.667µs) from=127.0.0.1:53833
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (271.667µs) from=127.0.0.1:53834
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (252.667µs) from=127.0.0.1:53835
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (260µs) from=127.0.0.1:53836
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (262.333µs) from=127.0.0.1:53837
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (354.666µs) from=127.0.0.1:53838
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (259.666µs) from=127.0.0.1:53839
2016/03/28 05:53:24 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:53840
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (258µs) from=127.0.0.1:53841
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (269.667µs) from=127.0.0.1:53842
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (253µs) from=127.0.0.1:53843
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (268.334µs) from=127.0.0.1:53844
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (331.333µs) from=127.0.0.1:53845
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (268µs) from=127.0.0.1:53846
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (282.667µs) from=127.0.0.1:53847
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (254µs) from=127.0.0.1:53848
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (262.667µs) from=127.0.0.1:53849
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (244.333µs) from=127.0.0.1:53850
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (242.666µs) from=127.0.0.1:53851
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:53852
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (248.667µs) from=127.0.0.1:53853
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (249.667µs) from=127.0.0.1:53854
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (255.666µs) from=127.0.0.1:53855
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (259µs) from=127.0.0.1:53856
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (295.666µs) from=127.0.0.1:53857
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (312.333µs) from=127.0.0.1:53858
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (267µs) from=127.0.0.1:53859
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (573.666µs) from=127.0.0.1:53860
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (285.667µs) from=127.0.0.1:53861
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (270.667µs) from=127.0.0.1:53862
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (278µs) from=127.0.0.1:53863
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (255.333µs) from=127.0.0.1:53864
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (314.667µs) from=127.0.0.1:53865
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (249µs) from=127.0.0.1:53866
2016/03/28 05:53:25 [DEBUG] http: Request GET /v1/catalog/nodes (293.666µs) from=127.0.0.1:53867
2016/03/28 05:53:25 [DEBUG] http: Request PUT /v1/session/create (316.876ms) from=127.0.0.1:53868
2016/03/28 05:53:26 [DEBUG] http: Request PUT /v1/kv/_rexec/fd4776b7-6c44-31ea-32f4-c2b6f58f0c0c/job?acquire=fd4776b7-6c44-31ea-32f4-c2b6f58f0c0c (354.743334ms) from=127.0.0.1:53869
2016/03/28 05:53:26 [DEBUG] http: Request GET /v1/kv/_rexec/fd4776b7-6c44-31ea-32f4-c2b6f58f0c0c/job (670.333µs) from=127.0.0.1:53870
2016/03/28 05:53:26 [DEBUG] http: Request DELETE /v1/kv/_rexec/fd4776b7-6c44-31ea-32f4-c2b6f58f0c0c?recurse= (433.884667ms) from=127.0.0.1:53871
2016/03/28 05:53:26 [DEBUG] http: Request GET /v1/kv/_rexec/fd4776b7-6c44-31ea-32f4-c2b6f58f0c0c/job (449µs) from=127.0.0.1:53873
2016/03/28 05:53:26 [DEBUG] http: Shutting down http server (127.0.0.1:10471)
--- PASS: TestExecCommand_UploadDestroy (4.60s)
=== RUN   TestExecCommand_StreamResults
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50201
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (340.334µs) from=127.0.0.1:50201
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50202
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (264.666µs) from=127.0.0.1:50202
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50203
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (291.666µs) from=127.0.0.1:50203
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50204
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (318µs) from=127.0.0.1:50204
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50205
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (267.334µs) from=127.0.0.1:50205
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50206
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:50206
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50207
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (291.333µs) from=127.0.0.1:50207
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50208
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (316.333µs) from=127.0.0.1:50208
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50209
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (353.333µs) from=127.0.0.1:50209
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50210
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (296.666µs) from=127.0.0.1:50210
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50211
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (275.667µs) from=127.0.0.1:50211
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50212
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (288.667µs) from=127.0.0.1:50212
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50213
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (311.333µs) from=127.0.0.1:50213
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50214
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (296.333µs) from=127.0.0.1:50214
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50215
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (523.667µs) from=127.0.0.1:50215
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50216
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (478.334µs) from=127.0.0.1:50216
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50217
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (325.333µs) from=127.0.0.1:50217
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50218
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (305µs) from=127.0.0.1:50218
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50219
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (394.667µs) from=127.0.0.1:50219
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50220
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:50220
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50221
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (290.667µs) from=127.0.0.1:50221
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50222
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (320µs) from=127.0.0.1:50222
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50223
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:50223
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50224
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (310.333µs) from=127.0.0.1:50224
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50225
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (429.333µs) from=127.0.0.1:50225
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50226
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (434.334µs) from=127.0.0.1:50226
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50227
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (371µs) from=127.0.0.1:50227
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50228
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (289.666µs) from=127.0.0.1:50228
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50229
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (271.334µs) from=127.0.0.1:50229
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50230
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (433µs) from=127.0.0.1:50230
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50231
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (294.333µs) from=127.0.0.1:50231
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50232
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (285.666µs) from=127.0.0.1:50232
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50233
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (275.667µs) from=127.0.0.1:50233
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50234
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (292.667µs) from=127.0.0.1:50234
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50235
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:50235
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50236
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (300µs) from=127.0.0.1:50236
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50237
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (268.667µs) from=127.0.0.1:50237
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50238
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (262.666µs) from=127.0.0.1:50238
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50239
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:50239
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50240
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (359.666µs) from=127.0.0.1:50240
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50241
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (293.334µs) from=127.0.0.1:50241
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50242
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (269.333µs) from=127.0.0.1:50242
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50243
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (270.666µs) from=127.0.0.1:50243
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50244
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (274µs) from=127.0.0.1:50244
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50245
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (263µs) from=127.0.0.1:50245
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50246
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (272.333µs) from=127.0.0.1:50246
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50247
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (262.666µs) from=127.0.0.1:50247
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50248
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (595.667µs) from=127.0.0.1:50248
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50249
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (288µs) from=127.0.0.1:50249
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50250
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (288.667µs) from=127.0.0.1:50250
2016/03/28 05:53:28 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:50251
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (708.333µs) from=127.0.0.1:50251
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (249.333µs) from=127.0.0.1:50252
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (372.334µs) from=127.0.0.1:50253
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (279.333µs) from=127.0.0.1:50254
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (257µs) from=127.0.0.1:50255
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (271.334µs) from=127.0.0.1:50256
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (312µs) from=127.0.0.1:50257
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (280.333µs) from=127.0.0.1:50258
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (285µs) from=127.0.0.1:50259
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (321µs) from=127.0.0.1:50260
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (278µs) from=127.0.0.1:50261
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (264.667µs) from=127.0.0.1:50262
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (271.333µs) from=127.0.0.1:50263
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (495.667µs) from=127.0.0.1:50264
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (303.333µs) from=127.0.0.1:50265
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (290.667µs) from=127.0.0.1:50266
2016/03/28 05:53:28 [DEBUG] http: Request GET /v1/catalog/nodes (257.666µs) from=127.0.0.1:50267
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (283µs) from=127.0.0.1:50268
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (268.667µs) from=127.0.0.1:50269
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (334.666µs) from=127.0.0.1:50270
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (251µs) from=127.0.0.1:50271
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (256.667µs) from=127.0.0.1:50272
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (398.667µs) from=127.0.0.1:50273
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (300.333µs) from=127.0.0.1:50274
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (241.666µs) from=127.0.0.1:50275
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (249.334µs) from=127.0.0.1:50276
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (307.667µs) from=127.0.0.1:50277
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (246µs) from=127.0.0.1:50278
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (251.667µs) from=127.0.0.1:50279
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (316µs) from=127.0.0.1:50280
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (275.666µs) from=127.0.0.1:50281
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (257µs) from=127.0.0.1:50282
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (233.334µs) from=127.0.0.1:50283
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (264.333µs) from=127.0.0.1:50284
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (247µs) from=127.0.0.1:50285
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (283.334µs) from=127.0.0.1:50286
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (258.666µs) from=127.0.0.1:50287
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (250.667µs) from=127.0.0.1:50288
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (251µs) from=127.0.0.1:50289
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (284.334µs) from=127.0.0.1:50290
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (267.333µs) from=127.0.0.1:50291
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (288.667µs) from=127.0.0.1:50292
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/catalog/nodes (344.666µs) from=127.0.0.1:50294
2016/03/28 05:53:29 [DEBUG] http: Request PUT /v1/session/create (239.318666ms) from=127.0.0.1:50295
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/?keys= (580µs) from=127.0.0.1:50297
2016/03/28 05:53:29 [DEBUG] http: Request PUT /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/foo/ack?acquire=7dfaf125-e7e4-7560-f215-cb3b8d429b9a (329.142ms) from=127.0.0.1:50296
2016/03/28 05:53:29 [DEBUG] http: Request GET /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/?index=1&keys= (324.101ms) from=127.0.0.1:50298
2016/03/28 05:53:30 [DEBUG] http: Request GET /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/?index=5&keys= (364.549667ms) from=127.0.0.1:50299
2016/03/28 05:53:30 [DEBUG] http: Request PUT /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/foo/exit?acquire=7dfaf125-e7e4-7560-f215-cb3b8d429b9a (361.622667ms) from=127.0.0.1:50300
2016/03/28 05:53:30 [DEBUG] http: Request GET /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/foo/exit (541.334µs) from=127.0.0.1:50301
2016/03/28 05:53:30 [DEBUG] http: Request GET /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/?index=6&keys= (248.600334ms) from=127.0.0.1:50302
2016/03/28 05:53:30 [DEBUG] http: Request PUT /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/foo/random?acquire=7dfaf125-e7e4-7560-f215-cb3b8d429b9a (245.880333ms) from=127.0.0.1:50303
2016/03/28 05:53:30 [ERR] http: Request PUT /v1/session/renew/fd4776b7-6c44-31ea-32f4-c2b6f58f0c0c, error: No cluster leader from=127.0.0.1:53979
2016/03/28 05:53:30 [DEBUG] http: Request PUT /v1/session/renew/fd4776b7-6c44-31ea-32f4-c2b6f58f0c0c (215.666µs) from=127.0.0.1:53979
2016/03/28 05:53:30 [DEBUG] http: Request PUT /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/foo/out/00000?acquire=7dfaf125-e7e4-7560-f215-cb3b8d429b9a (183.627667ms) from=127.0.0.1:50305
2016/03/28 05:53:30 [DEBUG] http: Request GET /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/?index=7&keys= (185.532334ms) from=127.0.0.1:50304
2016/03/28 05:53:30 [DEBUG] http: Request GET /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/foo/out/00000 (578.334µs) from=127.0.0.1:50307
2016/03/28 05:53:30 [DEBUG] http: Request PUT /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/foo/out/00001?acquire=7dfaf125-e7e4-7560-f215-cb3b8d429b9a (146.669666ms) from=127.0.0.1:50309
2016/03/28 05:53:30 [DEBUG] http: Request GET /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/?index=8&keys= (147.912333ms) from=127.0.0.1:50308
2016/03/28 05:53:30 [DEBUG] http: Request GET /v1/kv/_rexec/7dfaf125-e7e4-7560-f215-cb3b8d429b9a/foo/out/00001 (672µs) from=127.0.0.1:50310
2016/03/28 05:53:31 [DEBUG] http: Shutting down http server (127.0.0.1:10481)
--- PASS: TestExecCommand_StreamResults (4.18s)
=== RUN   TestForceLeaveCommand_implements
--- PASS: TestForceLeaveCommand_implements (0.00s)
=== RUN   TestForceLeaveCommandRun
2016/03/28 05:53:34 [DEBUG] http: Shutting down http server (127.0.0.1:10501)
2016/03/28 05:53:34 [INFO] agent.rpc: Accepted client: 127.0.0.1:34560
2016/03/28 05:53:34 [DEBUG] http: Shutting down http server (127.0.0.1:10501)
2016/03/28 05:53:34 [DEBUG] http: Shutting down http server (127.0.0.1:10491)
--- PASS: TestForceLeaveCommandRun (3.37s)
=== RUN   TestForceLeaveCommandRun_noAddrs
--- PASS: TestForceLeaveCommandRun_noAddrs (0.00s)
=== RUN   TestInfoCommand_implements
--- PASS: TestInfoCommand_implements (0.00s)
=== RUN   TestInfoCommandRun
2016/03/28 05:53:34 [ERR] http: Request PUT /v1/session/renew/7dfaf125-e7e4-7560-f215-cb3b8d429b9a, error: No cluster leader from=127.0.0.1:50315
2016/03/28 05:53:34 [DEBUG] http: Request PUT /v1/session/renew/7dfaf125-e7e4-7560-f215-cb3b8d429b9a (251.666µs) from=127.0.0.1:50315
2016/03/28 05:53:35 [INFO] agent.rpc: Accepted client: 127.0.0.1:58070
2016/03/28 05:53:35 [DEBUG] http: Shutting down http server (127.0.0.1:10511)
--- PASS: TestInfoCommandRun (1.25s)
=== RUN   TestJoinCommand_implements
--- PASS: TestJoinCommand_implements (0.00s)
=== RUN   TestJoinCommandRun
2016/03/28 05:53:37 [INFO] agent.rpc: Accepted client: 127.0.0.1:53653
2016/03/28 05:53:38 [DEBUG] http: Shutting down http server (127.0.0.1:10531)
2016/03/28 05:53:38 [DEBUG] http: Shutting down http server (127.0.0.1:10521)
--- PASS: TestJoinCommandRun (2.91s)
=== RUN   TestJoinCommandRun_wan
2016/03/28 05:53:40 [INFO] agent.rpc: Accepted client: 127.0.0.1:44474
2016/03/28 05:53:40 [DEBUG] http: Shutting down http server (127.0.0.1:10551)
2016/03/28 05:53:41 [DEBUG] http: Shutting down http server (127.0.0.1:10541)
--- PASS: TestJoinCommandRun_wan (2.43s)
=== RUN   TestJoinCommandRun_noAddrs
--- PASS: TestJoinCommandRun_noAddrs (0.00s)
=== RUN   TestKeygenCommand_implements
--- PASS: TestKeygenCommand_implements (0.00s)
=== RUN   TestKeygenCommand
--- PASS: TestKeygenCommand (0.00s)
=== RUN   TestKeyringCommand_implements
--- PASS: TestKeyringCommand_implements (0.00s)
=== RUN   TestKeyringCommandRun
2016/03/28 05:53:41 [INFO] agent.rpc: Accepted client: 127.0.0.1:48215
2016/03/28 05:53:41 [INFO] agent.rpc: Accepted client: 127.0.0.1:48217
2016/03/28 05:53:41 [INFO] agent.rpc: Accepted client: 127.0.0.1:48218
2016/03/28 05:53:41 [INFO] agent.rpc: Accepted client: 127.0.0.1:48219
2016/03/28 05:53:41 [INFO] agent.rpc: Accepted client: 127.0.0.1:48220
2016/03/28 05:53:41 [INFO] agent.rpc: Accepted client: 127.0.0.1:48221
2016/03/28 05:53:42 [DEBUG] http: Shutting down http server (127.0.0.1:10561)
--- PASS: TestKeyringCommandRun (1.47s)
=== RUN   TestKeyringCommandRun_help
--- PASS: TestKeyringCommandRun_help (0.00s)
=== RUN   TestKeyringCommandRun_failedConnection
--- PASS: TestKeyringCommandRun_failedConnection (0.00s)
=== RUN   TestLeaveCommand_implements
--- PASS: TestLeaveCommand_implements (0.00s)
=== RUN   TestLeaveCommandRun
2016/03/28 05:53:43 [INFO] agent.rpc: Accepted client: 127.0.0.1:39133
2016/03/28 05:53:43 [INFO] agent.rpc: Graceful leave triggered
2016/03/28 05:53:44 [DEBUG] http: Shutting down http server (127.0.0.1:10571)
--- PASS: TestLeaveCommandRun (1.88s)
=== RUN   TestLockCommand_implements
--- PASS: TestLockCommand_implements (0.00s)
=== RUN   TestLockCommand_BadArgs
--- PASS: TestLockCommand_BadArgs (0.00s)
=== RUN   TestLockCommand_Run
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48613
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (265µs) from=127.0.0.1:48613
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48614
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (303.666µs) from=127.0.0.1:48614
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48615
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (298.666µs) from=127.0.0.1:48615
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48616
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (302.666µs) from=127.0.0.1:48616
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48617
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (310.667µs) from=127.0.0.1:48617
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48618
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (355.333µs) from=127.0.0.1:48618
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48619
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (340.666µs) from=127.0.0.1:48619
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48620
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (332µs) from=127.0.0.1:48620
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48621
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (242.667µs) from=127.0.0.1:48621
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48622
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (224.334µs) from=127.0.0.1:48622
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48623
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (223µs) from=127.0.0.1:48623
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48624
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (225.334µs) from=127.0.0.1:48624
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48625
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (297µs) from=127.0.0.1:48625
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48626
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (365.666µs) from=127.0.0.1:48626
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48627
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (423µs) from=127.0.0.1:48627
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48628
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (315µs) from=127.0.0.1:48628
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48629
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (310.666µs) from=127.0.0.1:48629
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48630
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (310µs) from=127.0.0.1:48630
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48631
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (295.334µs) from=127.0.0.1:48631
2016/03/28 05:53:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48632
2016/03/28 05:53:45 [DEBUG] http: Request GET /v1/catalog/nodes (241.667µs) from=127.0.0.1:48632
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48633
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (240.333µs) from=127.0.0.1:48633
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48634
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (238.666µs) from=127.0.0.1:48634
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48635
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (249.334µs) from=127.0.0.1:48635
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48636
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (308.334µs) from=127.0.0.1:48636
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48637
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (296.333µs) from=127.0.0.1:48637
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48638
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (329.666µs) from=127.0.0.1:48638
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48639
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (306.333µs) from=127.0.0.1:48639
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48640
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (338.667µs) from=127.0.0.1:48640
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48641
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (622µs) from=127.0.0.1:48641
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48642
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (301.334µs) from=127.0.0.1:48642
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48643
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (358.666µs) from=127.0.0.1:48643
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48644
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:48644
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48645
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (326µs) from=127.0.0.1:48645
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48646
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (311.333µs) from=127.0.0.1:48646
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48647
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (321µs) from=127.0.0.1:48647
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48648
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (281µs) from=127.0.0.1:48648
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48649
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (301.666µs) from=127.0.0.1:48649
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48650
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (294.667µs) from=127.0.0.1:48650
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48651
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (285.667µs) from=127.0.0.1:48651
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48652
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (303µs) from=127.0.0.1:48652
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48653
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (300µs) from=127.0.0.1:48653
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48654
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (319.666µs) from=127.0.0.1:48654
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48655
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (234.333µs) from=127.0.0.1:48655
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48656
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (305.667µs) from=127.0.0.1:48656
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48657
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (366.333µs) from=127.0.0.1:48657
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48658
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (242.666µs) from=127.0.0.1:48658
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48659
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (305.667µs) from=127.0.0.1:48659
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48660
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (292.667µs) from=127.0.0.1:48660
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48661
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (300.666µs) from=127.0.0.1:48661
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48662
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (296µs) from=127.0.0.1:48662
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48663
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (308µs) from=127.0.0.1:48663
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48664
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (294.333µs) from=127.0.0.1:48664
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48665
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (269µs) from=127.0.0.1:48665
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48666
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (258.333µs) from=127.0.0.1:48666
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48667
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (288µs) from=127.0.0.1:48667
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48668
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (337.333µs) from=127.0.0.1:48668
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48669
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (489.333µs) from=127.0.0.1:48669
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48670
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (233.666µs) from=127.0.0.1:48670
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48671
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (269.334µs) from=127.0.0.1:48671
2016/03/28 05:53:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48672
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (318.333µs) from=127.0.0.1:48672
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (232.666µs) from=127.0.0.1:48673
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (220µs) from=127.0.0.1:48674
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (241µs) from=127.0.0.1:48675
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (262.334µs) from=127.0.0.1:48676
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (233.333µs) from=127.0.0.1:48677
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (217.666µs) from=127.0.0.1:48678
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (222.334µs) from=127.0.0.1:48679
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (215.333µs) from=127.0.0.1:48680
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (325µs) from=127.0.0.1:48681
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (210.667µs) from=127.0.0.1:48682
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (217µs) from=127.0.0.1:48683
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (217µs) from=127.0.0.1:48684
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (206.666µs) from=127.0.0.1:48685
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (237µs) from=127.0.0.1:48686
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (225µs) from=127.0.0.1:48687
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (222.667µs) from=127.0.0.1:48688
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (223.667µs) from=127.0.0.1:48689
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (220.666µs) from=127.0.0.1:48690
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (224.667µs) from=127.0.0.1:48691
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (207µs) from=127.0.0.1:48692
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (220.333µs) from=127.0.0.1:48693
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (236µs) from=127.0.0.1:48694
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (227µs) from=127.0.0.1:48695
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (223µs) from=127.0.0.1:48696
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (214.667µs) from=127.0.0.1:48697
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (245.667µs) from=127.0.0.1:48698
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (229.334µs) from=127.0.0.1:48699
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (213.334µs) from=127.0.0.1:48700
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (216µs) from=127.0.0.1:48701
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (210µs) from=127.0.0.1:48702
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (211.333µs) from=127.0.0.1:48703
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (229.666µs) from=127.0.0.1:48704
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (222µs) from=127.0.0.1:48705
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (252µs) from=127.0.0.1:48706
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (207.333µs) from=127.0.0.1:48707
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (203µs) from=127.0.0.1:48708
2016/03/28 05:53:46 [DEBUG] http: Request GET /v1/catalog/nodes (212.667µs) from=127.0.0.1:48709
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (210µs) from=127.0.0.1:48710
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (226µs) from=127.0.0.1:48711
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (229.666µs) from=127.0.0.1:48712
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (209.334µs) from=127.0.0.1:48713
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (225µs) from=127.0.0.1:48714
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (206.333µs) from=127.0.0.1:48715
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (230.666µs) from=127.0.0.1:48716
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (217.667µs) from=127.0.0.1:48717
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (223.667µs) from=127.0.0.1:48718
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (202.667µs) from=127.0.0.1:48719
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (201.666µs) from=127.0.0.1:48720
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (209µs) from=127.0.0.1:48721
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (224.333µs) from=127.0.0.1:48722
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (202.667µs) from=127.0.0.1:48723
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (225µs) from=127.0.0.1:48724
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (220.333µs) from=127.0.0.1:48725
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (205.333µs) from=127.0.0.1:48726
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (202.666µs) from=127.0.0.1:48727
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (233.333µs) from=127.0.0.1:48728
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (239.667µs) from=127.0.0.1:48729
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (198.667µs) from=127.0.0.1:48730
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (199.333µs) from=127.0.0.1:48731
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/catalog/nodes (251.666µs) from=127.0.0.1:48732
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/agent/self (643µs) from=127.0.0.1:48733
2016/03/28 05:53:47 [DEBUG] http: Request PUT /v1/session/create (276.254667ms) from=127.0.0.1:48734
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?wait=15000ms (513.334µs) from=127.0.0.1:48735
2016/03/28 05:53:47 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?acquire=4dd8d4c1-6ac1-7821-ba45-18b09fe17709&flags=3304740253564472344 (405.541ms) from=127.0.0.1:48736
2016/03/28 05:53:47 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent= (1.254ms) from=127.0.0.1:48737
2016/03/28 05:53:48 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?flags=3304740253564472344&release=4dd8d4c1-6ac1-7821-ba45-18b09fe17709 (460.222667ms) from=127.0.0.1:48739
2016/03/28 05:53:48 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent=&index=5 (467.203333ms) from=127.0.0.1:48738
2016/03/28 05:53:48 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (985.667µs) from=127.0.0.1:48740
2016/03/28 05:53:48 [DEBUG] http: Request PUT /v1/session/destroy/4dd8d4c1-6ac1-7821-ba45-18b09fe17709 (330.896666ms) from=127.0.0.1:48741
2016/03/28 05:53:49 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=6 (615.335667ms) from=127.0.0.1:48742
2016/03/28 05:53:49 [DEBUG] http: Shutting down http server (127.0.0.1:10581)
--- PASS: TestLockCommand_Run (4.96s)
=== RUN   TestLockCommand_Try_Lock
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36272
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (271.333µs) from=127.0.0.1:36272
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36273
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (235µs) from=127.0.0.1:36273
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36274
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (264.666µs) from=127.0.0.1:36274
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36275
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (285.667µs) from=127.0.0.1:36275
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36276
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (254.334µs) from=127.0.0.1:36276
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36277
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (330.333µs) from=127.0.0.1:36277
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36278
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (302.333µs) from=127.0.0.1:36278
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36279
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (223.667µs) from=127.0.0.1:36279
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36280
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (306.333µs) from=127.0.0.1:36280
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36281
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (265.334µs) from=127.0.0.1:36281
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36282
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (282.334µs) from=127.0.0.1:36282
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36283
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:36283
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36284
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:36284
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36285
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (300.666µs) from=127.0.0.1:36285
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36286
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (283.333µs) from=127.0.0.1:36286
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36287
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (401µs) from=127.0.0.1:36287
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36288
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (299.333µs) from=127.0.0.1:36288
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36289
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (284µs) from=127.0.0.1:36289
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36290
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (307.667µs) from=127.0.0.1:36290
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36291
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (406.334µs) from=127.0.0.1:36291
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36292
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (347µs) from=127.0.0.1:36292
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36293
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (237.667µs) from=127.0.0.1:36293
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36294
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (304.667µs) from=127.0.0.1:36294
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36295
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (219.667µs) from=127.0.0.1:36295
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36296
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (313µs) from=127.0.0.1:36296
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36297
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (227µs) from=127.0.0.1:36297
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36298
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (255µs) from=127.0.0.1:36298
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36299
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (280µs) from=127.0.0.1:36299
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36300
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (328.333µs) from=127.0.0.1:36300
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36301
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (323µs) from=127.0.0.1:36301
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36302
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (281.666µs) from=127.0.0.1:36302
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36303
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (345.666µs) from=127.0.0.1:36303
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36304
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (440.334µs) from=127.0.0.1:36304
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36305
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (312µs) from=127.0.0.1:36305
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36306
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (296.666µs) from=127.0.0.1:36306
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36307
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (287.333µs) from=127.0.0.1:36307
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36308
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (325.333µs) from=127.0.0.1:36308
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36309
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (292.333µs) from=127.0.0.1:36309
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36310
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (275.667µs) from=127.0.0.1:36310
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36311
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (356.334µs) from=127.0.0.1:36311
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36312
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (319.667µs) from=127.0.0.1:36312
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36313
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (356.333µs) from=127.0.0.1:36313
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36314
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (257.667µs) from=127.0.0.1:36314
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36315
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (229µs) from=127.0.0.1:36315
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36316
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (283µs) from=127.0.0.1:36316
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36317
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (278.334µs) from=127.0.0.1:36317
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36318
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (418.667µs) from=127.0.0.1:36318
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36319
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (287.333µs) from=127.0.0.1:36319
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36320
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (288.333µs) from=127.0.0.1:36320
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36321
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (312µs) from=127.0.0.1:36321
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36322
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (326.667µs) from=127.0.0.1:36322
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36323
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (376.667µs) from=127.0.0.1:36323
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36324
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (297.667µs) from=127.0.0.1:36324
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36325
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (223µs) from=127.0.0.1:36325
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36326
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (337.667µs) from=127.0.0.1:36326
2016/03/28 05:53:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36327
2016/03/28 05:53:50 [DEBUG] http: Request GET /v1/catalog/nodes (252µs) from=127.0.0.1:36327
2016/03/28 05:53:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36328
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (294.667µs) from=127.0.0.1:36328
2016/03/28 05:53:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36329
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (276.667µs) from=127.0.0.1:36329
2016/03/28 05:53:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36330
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (337.333µs) from=127.0.0.1:36330
2016/03/28 05:53:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36331
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (301.333µs) from=127.0.0.1:36331
2016/03/28 05:53:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36332
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:36332
2016/03/28 05:53:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36333
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (419.667µs) from=127.0.0.1:36333
2016/03/28 05:53:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36334
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (496.667µs) from=127.0.0.1:36334
2016/03/28 05:53:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36335
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (430.666µs) from=127.0.0.1:36335
2016/03/28 05:53:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36336
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (426.334µs) from=127.0.0.1:36336
2016/03/28 05:53:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36337
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (438µs) from=127.0.0.1:36337
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (238.333µs) from=127.0.0.1:36338
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (207.666µs) from=127.0.0.1:36339
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (210.667µs) from=127.0.0.1:36340
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (319.666µs) from=127.0.0.1:36341
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (199.333µs) from=127.0.0.1:36342
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (196µs) from=127.0.0.1:36343
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (223.667µs) from=127.0.0.1:36344
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (232.667µs) from=127.0.0.1:36345
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (205.334µs) from=127.0.0.1:36346
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (232.333µs) from=127.0.0.1:36347
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (381.666µs) from=127.0.0.1:36348
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (214.667µs) from=127.0.0.1:36349
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (369µs) from=127.0.0.1:36350
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (331.333µs) from=127.0.0.1:36351
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (378µs) from=127.0.0.1:36352
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (218.667µs) from=127.0.0.1:36353
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (221.333µs) from=127.0.0.1:36354
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (205.666µs) from=127.0.0.1:36355
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (212.666µs) from=127.0.0.1:36356
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (198µs) from=127.0.0.1:36357
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (265.334µs) from=127.0.0.1:36358
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (215.667µs) from=127.0.0.1:36359
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (203.667µs) from=127.0.0.1:36360
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (329.666µs) from=127.0.0.1:36361
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (220µs) from=127.0.0.1:36362
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (206µs) from=127.0.0.1:36363
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (225µs) from=127.0.0.1:36364
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (246.667µs) from=127.0.0.1:36365
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (225.666µs) from=127.0.0.1:36366
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (210.334µs) from=127.0.0.1:36367
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (211µs) from=127.0.0.1:36368
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (210.333µs) from=127.0.0.1:36369
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (241µs) from=127.0.0.1:36371
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (221.333µs) from=127.0.0.1:36372
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (210.666µs) from=127.0.0.1:36373
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (208.333µs) from=127.0.0.1:36374
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (204.666µs) from=127.0.0.1:36375
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (241.333µs) from=127.0.0.1:36376
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (238.334µs) from=127.0.0.1:36377
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (238.333µs) from=127.0.0.1:36378
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (219.333µs) from=127.0.0.1:36379
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (217.333µs) from=127.0.0.1:36380
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (214.666µs) from=127.0.0.1:36381
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (210.334µs) from=127.0.0.1:36382
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (229.667µs) from=127.0.0.1:36383
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (227µs) from=127.0.0.1:36384
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (229.667µs) from=127.0.0.1:36385
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (214µs) from=127.0.0.1:36386
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (217µs) from=127.0.0.1:36387
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (217.333µs) from=127.0.0.1:36388
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (226.333µs) from=127.0.0.1:36389
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (240.666µs) from=127.0.0.1:36390
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (219µs) from=127.0.0.1:36391
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (211µs) from=127.0.0.1:36392
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (208.334µs) from=127.0.0.1:36393
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (216.667µs) from=127.0.0.1:36394
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (222.666µs) from=127.0.0.1:36395
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/catalog/nodes (264.333µs) from=127.0.0.1:36396
2016/03/28 05:53:51 [DEBUG] http: Request GET /v1/agent/self (628µs) from=127.0.0.1:36397
2016/03/28 05:53:52 [DEBUG] http: Request PUT /v1/session/create (163.760333ms) from=127.0.0.1:36398
2016/03/28 05:53:52 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?wait=10000ms (1.394333ms) from=127.0.0.1:36399
2016/03/28 05:53:52 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?acquire=f4444640-0263-723e-4ca5-cd4e93b523d4&flags=3304740253564472344 (237.676667ms) from=127.0.0.1:36400
2016/03/28 05:53:52 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent= (1.049ms) from=127.0.0.1:36401
2016/03/28 05:53:52 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent=&index=5 (478.856333ms) from=127.0.0.1:36402
2016/03/28 05:53:52 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?flags=3304740253564472344&release=f4444640-0263-723e-4ca5-cd4e93b523d4 (472.894ms) from=127.0.0.1:36403
2016/03/28 05:53:52 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (576.333µs) from=127.0.0.1:36404
2016/03/28 05:53:53 [DEBUG] http: Request PUT /v1/session/destroy/f4444640-0263-723e-4ca5-cd4e93b523d4 (653.158667ms) from=127.0.0.1:36405
2016/03/28 05:53:53 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=6 (1.025825s) from=127.0.0.1:36406
2016/03/28 05:53:53 [DEBUG] http: Shutting down http server (127.0.0.1:10591)
--- PASS: TestLockCommand_Try_Lock (4.64s)
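The request sequence logged just above (PUT /v1/session/create, PUT .../.lock?acquire=, the blocking GET on the key, PUT .../.lock?release=, PUT /v1/session/destroy/..., DELETE ...?cas=6) is the raw HTTP handshake the `consul lock` command drives in this test. A minimal sketch of that same handshake using only the Go standard library; it assumes an agent reachable on the default 127.0.0.1:8500 rather than the per-test port (127.0.0.1:10591) used above, and the key name simply mirrors the test's test/prefix/.lock:

// Sketch, not part of the build log: lock handshake against a local agent.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

const agent = "http://127.0.0.1:8500" // assumed default agent address

func put(path string, body []byte) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodPut, agent+path, bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	return http.DefaultClient.Do(req)
}

func main() {
	// 1. PUT /v1/session/create returns {"ID": "..."}.
	resp, err := put("/v1/session/create", []byte(`{"Name":"demo-lock"}`))
	if err != nil {
		panic(err)
	}
	var session struct{ ID string }
	json.NewDecoder(resp.Body).Decode(&session)
	resp.Body.Close()

	// 2. PUT key?acquire=<session> answers "true" if the lock was taken.
	resp, err = put("/v1/kv/test/prefix/.lock?acquire="+session.ID, []byte("held"))
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	// 3. PUT key?release=<session> hands the lock back.
	resp, err = put("/v1/kv/test/prefix/.lock?release="+session.ID, nil)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	// 4. PUT /v1/session/destroy/<session> cleans the session up.
	resp, err = put("/v1/session/destroy/"+session.ID, nil)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()

	fmt.Println("lock cycle complete for session", session.ID)
}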
=== RUN   TestLockCommand_Try_Semaphore
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33030
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (312.333µs) from=127.0.0.1:33030
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33031
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (335.333µs) from=127.0.0.1:33031
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33032
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (221µs) from=127.0.0.1:33032
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33033
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (296.333µs) from=127.0.0.1:33033
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33034
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:33034
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33035
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (240µs) from=127.0.0.1:33035
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33036
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (245.333µs) from=127.0.0.1:33036
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33037
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (325.333µs) from=127.0.0.1:33037
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33038
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (298µs) from=127.0.0.1:33038
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33039
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (316.333µs) from=127.0.0.1:33039
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33040
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (227.667µs) from=127.0.0.1:33040
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33041
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (241µs) from=127.0.0.1:33041
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33042
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (226.667µs) from=127.0.0.1:33042
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33043
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (238.334µs) from=127.0.0.1:33043
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33044
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (263.666µs) from=127.0.0.1:33044
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33045
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (253.333µs) from=127.0.0.1:33045
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33046
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (230.667µs) from=127.0.0.1:33046
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33047
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (245.667µs) from=127.0.0.1:33047
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33048
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (290.333µs) from=127.0.0.1:33048
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33049
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (296µs) from=127.0.0.1:33049
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33050
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (294.333µs) from=127.0.0.1:33050
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33051
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (233.667µs) from=127.0.0.1:33051
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33052
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (288.334µs) from=127.0.0.1:33052
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33053
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (224.333µs) from=127.0.0.1:33053
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33054
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (240.333µs) from=127.0.0.1:33054
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33055
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (247.667µs) from=127.0.0.1:33055
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33056
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (237µs) from=127.0.0.1:33056
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33057
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (221µs) from=127.0.0.1:33057
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33058
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (219.667µs) from=127.0.0.1:33058
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33059
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (258µs) from=127.0.0.1:33059
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33060
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (232.667µs) from=127.0.0.1:33060
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33061
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (224.667µs) from=127.0.0.1:33061
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33062
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (253.333µs) from=127.0.0.1:33062
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33063
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (221.667µs) from=127.0.0.1:33063
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33064
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (260µs) from=127.0.0.1:33064
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33065
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (224.333µs) from=127.0.0.1:33065
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33066
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (223.667µs) from=127.0.0.1:33066
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33067
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (260.334µs) from=127.0.0.1:33067
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33068
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:33068
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33069
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (565.333µs) from=127.0.0.1:33069
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33070
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (224.333µs) from=127.0.0.1:33070
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33071
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (236.666µs) from=127.0.0.1:33071
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33072
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (229.667µs) from=127.0.0.1:33072
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33073
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (441µs) from=127.0.0.1:33073
2016/03/28 05:53:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33074
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (230.667µs) from=127.0.0.1:33074
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (227.333µs) from=127.0.0.1:33075
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (245µs) from=127.0.0.1:33076
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (225.667µs) from=127.0.0.1:33077
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (215.666µs) from=127.0.0.1:33078
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (212.334µs) from=127.0.0.1:33079
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (208.667µs) from=127.0.0.1:33080
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (236.333µs) from=127.0.0.1:33081
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (288.333µs) from=127.0.0.1:33082
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (262µs) from=127.0.0.1:33083
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (273.333µs) from=127.0.0.1:33084
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (206.667µs) from=127.0.0.1:33085
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (288µs) from=127.0.0.1:33086
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (381µs) from=127.0.0.1:33087
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (220.333µs) from=127.0.0.1:33088
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (221.666µs) from=127.0.0.1:33089
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (220.333µs) from=127.0.0.1:33090
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (214.666µs) from=127.0.0.1:33091
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (245.333µs) from=127.0.0.1:33092
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (274.333µs) from=127.0.0.1:33093
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (210µs) from=127.0.0.1:33094
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (213.667µs) from=127.0.0.1:33095
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (219.333µs) from=127.0.0.1:33096
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (212.333µs) from=127.0.0.1:33097
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (208.333µs) from=127.0.0.1:33098
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (213.334µs) from=127.0.0.1:33099
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (243µs) from=127.0.0.1:33100
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (206µs) from=127.0.0.1:33101
2016/03/28 05:53:55 [DEBUG] http: Request GET /v1/catalog/nodes (208µs) from=127.0.0.1:33102
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (446.333µs) from=127.0.0.1:33103
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (227.666µs) from=127.0.0.1:33104
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (219µs) from=127.0.0.1:33105
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (232.334µs) from=127.0.0.1:33106
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (218.334µs) from=127.0.0.1:33107
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (222.666µs) from=127.0.0.1:33108
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (205.667µs) from=127.0.0.1:33109
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (202µs) from=127.0.0.1:33110
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (235.666µs) from=127.0.0.1:33111
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (219.333µs) from=127.0.0.1:33112
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (219.334µs) from=127.0.0.1:33113
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (224µs) from=127.0.0.1:33114
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (229.333µs) from=127.0.0.1:33115
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (255µs) from=127.0.0.1:33116
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (240.666µs) from=127.0.0.1:33117
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (209.334µs) from=127.0.0.1:33118
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/catalog/nodes (260.666µs) from=127.0.0.1:33119
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/agent/self (695.334µs) from=127.0.0.1:33120
2016/03/28 05:53:56 [DEBUG] http: Request PUT /v1/session/create (184.982ms) from=127.0.0.1:33121
2016/03/28 05:53:56 [DEBUG] http: Request PUT /v1/kv/test/prefix/ea5382e2-bd07-9a40-adba-567f1ca0e493?acquire=ea5382e2-bd07-9a40-adba-567f1ca0e493&flags=16210313421097356768 (319.164333ms) from=127.0.0.1:33122
2016/03/28 05:53:56 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse=&wait=10000ms (769.666µs) from=127.0.0.1:33123
2016/03/28 05:53:57 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=0&flags=16210313421097356768 (294.225ms) from=127.0.0.1:33124
2016/03/28 05:53:57 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&recurse= (1.413333ms) from=127.0.0.1:33125
2016/03/28 05:53:57 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (522.667µs) from=127.0.0.1:33127
2016/03/28 05:53:57 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=6&flags=16210313421097356768 (282.085667ms) from=127.0.0.1:33128
2016/03/28 05:53:57 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&index=6&recurse= (311.788ms) from=127.0.0.1:33126
2016/03/28 05:53:57 [DEBUG] http: Request DELETE /v1/kv/test/prefix/ea5382e2-bd07-9a40-adba-567f1ca0e493 (185.816333ms) from=127.0.0.1:33129
2016/03/28 05:53:57 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse= (620µs) from=127.0.0.1:33130
2016/03/28 05:53:57 [DEBUG] http: Request PUT /v1/session/destroy/ea5382e2-bd07-9a40-adba-567f1ca0e493 (219.688334ms) from=127.0.0.1:33131
2016/03/28 05:53:57 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=7 (415.721667ms) from=127.0.0.1:33133
2016/03/28 05:53:58 [DEBUG] http: Shutting down http server (127.0.0.1:10601)
--- PASS: TestLockCommand_Try_Semaphore (4.22s)
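Compared with the plain lock, the semaphore test above also writes a per-holder contender key (test/prefix/ea5382e2-...) and coordinates membership through check-and-set writes on test/prefix/.lock: ?cas=0 creates the coordination key only if it does not exist yet, and the final DELETE passes ?cas=7, the ModifyIndex read beforehand. A minimal sketch of that cas pattern, again assuming an agent on 127.0.0.1:8500; the JSON body is illustrative only, not the exact payload the semaphore stores:

// Sketch, not part of the build log: check-and-set writes on a KV key.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// casPut issues a PUT and returns Consul's "true"/"false" body.
func casPut(url, body string) (string, error) {
	req, err := http.NewRequest(http.MethodPut, url, bytes.NewBufferString(body))
	if err != nil {
		return "", err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	out, err := io.ReadAll(resp.Body)
	return string(out), err
}

func main() {
	base := "http://127.0.0.1:8500/v1/kv/test/prefix/.lock" // assumed agent/key

	// Create the coordination key only if nobody else has (cas=0).
	created, err := casPut(base+"?cas=0", `{"Limit":1,"Holders":{}}`)
	if err != nil {
		panic(err)
	}
	fmt.Println("created:", created)

	// Later updates cite the ModifyIndex observed in a prior read (cas=6 in
	// the log); with a stale index Consul answers "false" and the caller
	// re-reads before retrying.
	updated, err := casPut(base+"?cas=6", `{"Limit":1,"Holders":{"some-session":true}}`)
	if err != nil {
		panic(err)
	}
	fmt.Println("updated:", updated)
}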
=== RUN   TestLockCommand_MonitorRetry_Lock_Default
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48727
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (282.666µs) from=127.0.0.1:48727
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48728
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (264µs) from=127.0.0.1:48728
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48729
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (230.333µs) from=127.0.0.1:48729
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48730
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (238.333µs) from=127.0.0.1:48730
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48731
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (223.333µs) from=127.0.0.1:48731
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48732
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (241.666µs) from=127.0.0.1:48732
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48733
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (246.666µs) from=127.0.0.1:48733
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48734
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (231.334µs) from=127.0.0.1:48734
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48735
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (226µs) from=127.0.0.1:48735
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48736
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (264µs) from=127.0.0.1:48736
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48737
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (222.667µs) from=127.0.0.1:48737
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48738
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (222.333µs) from=127.0.0.1:48738
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48739
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (305.333µs) from=127.0.0.1:48739
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48740
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (219.667µs) from=127.0.0.1:48740
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48741
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (232µs) from=127.0.0.1:48741
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48742
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (222.334µs) from=127.0.0.1:48742
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48743
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (235.333µs) from=127.0.0.1:48743
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48744
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (377µs) from=127.0.0.1:48744
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48745
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (227.333µs) from=127.0.0.1:48745
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48746
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (242.666µs) from=127.0.0.1:48746
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48747
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (234.667µs) from=127.0.0.1:48747
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48748
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (220µs) from=127.0.0.1:48748
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48749
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (281.333µs) from=127.0.0.1:48749
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48750
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (349µs) from=127.0.0.1:48750
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48751
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (329.667µs) from=127.0.0.1:48751
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48752
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (352µs) from=127.0.0.1:48752
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48753
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (324.334µs) from=127.0.0.1:48753
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48754
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (255µs) from=127.0.0.1:48754
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48755
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (331.333µs) from=127.0.0.1:48755
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48756
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (223.666µs) from=127.0.0.1:48756
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48757
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (220.666µs) from=127.0.0.1:48757
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48758
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (224.333µs) from=127.0.0.1:48758
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48759
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (240.333µs) from=127.0.0.1:48759
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48760
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (223.333µs) from=127.0.0.1:48760
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48761
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (241.667µs) from=127.0.0.1:48761
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48762
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (335.333µs) from=127.0.0.1:48762
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48763
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (227.333µs) from=127.0.0.1:48763
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48764
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (228.333µs) from=127.0.0.1:48764
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48765
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (289µs) from=127.0.0.1:48765
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48766
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (274µs) from=127.0.0.1:48766
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48767
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (238µs) from=127.0.0.1:48767
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48768
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (229.666µs) from=127.0.0.1:48768
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48769
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (258.666µs) from=127.0.0.1:48769
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48770
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (317µs) from=127.0.0.1:48770
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48771
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (226µs) from=127.0.0.1:48771
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48772
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (226.333µs) from=127.0.0.1:48772
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48773
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (599.333µs) from=127.0.0.1:48773
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48774
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (224.333µs) from=127.0.0.1:48774
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48775
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (273.666µs) from=127.0.0.1:48775
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48776
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (233µs) from=127.0.0.1:48776
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48777
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (330µs) from=127.0.0.1:48777
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48778
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (244.667µs) from=127.0.0.1:48778
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48779
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (235.667µs) from=127.0.0.1:48779
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48780
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (290µs) from=127.0.0.1:48780
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48781
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (229.667µs) from=127.0.0.1:48781
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48782
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:48782
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48783
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (229.666µs) from=127.0.0.1:48783
2016/03/28 05:53:59 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48784
2016/03/28 05:53:59 [DEBUG] http: Request GET /v1/catalog/nodes (280µs) from=127.0.0.1:48784
2016/03/28 05:54:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48785
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (227µs) from=127.0.0.1:48785
2016/03/28 05:54:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48786
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (253.333µs) from=127.0.0.1:48786
2016/03/28 05:54:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48787
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (229µs) from=127.0.0.1:48787
2016/03/28 05:54:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48788
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (238.333µs) from=127.0.0.1:48788
2016/03/28 05:54:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48789
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (222.667µs) from=127.0.0.1:48789
2016/03/28 05:54:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48790
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (234µs) from=127.0.0.1:48790
2016/03/28 05:54:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48791
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (227.333µs) from=127.0.0.1:48791
2016/03/28 05:54:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48792
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (734µs) from=127.0.0.1:48792
2016/03/28 05:54:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48793
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (230µs) from=127.0.0.1:48793
2016/03/28 05:54:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48794
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (227.333µs) from=127.0.0.1:48794
2016/03/28 05:54:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48795
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (239µs) from=127.0.0.1:48795
2016/03/28 05:54:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:48796
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (235µs) from=127.0.0.1:48796
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (303.667µs) from=127.0.0.1:48797
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (242.333µs) from=127.0.0.1:48798
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (217.667µs) from=127.0.0.1:48799
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (232.667µs) from=127.0.0.1:48800
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (223µs) from=127.0.0.1:48801
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (213µs) from=127.0.0.1:48802
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (210µs) from=127.0.0.1:48803
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (241.333µs) from=127.0.0.1:48804
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (226.667µs) from=127.0.0.1:48805
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (236.667µs) from=127.0.0.1:48806
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (219.333µs) from=127.0.0.1:48807
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (237.666µs) from=127.0.0.1:48808
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (218.666µs) from=127.0.0.1:48809
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (380.333µs) from=127.0.0.1:48810
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (224.667µs) from=127.0.0.1:48811
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (209.667µs) from=127.0.0.1:48812
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (218µs) from=127.0.0.1:48813
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (223µs) from=127.0.0.1:48814
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (253.333µs) from=127.0.0.1:48815
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (213.667µs) from=127.0.0.1:48816
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (208.666µs) from=127.0.0.1:48817
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (209.333µs) from=127.0.0.1:48818
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (208.333µs) from=127.0.0.1:48819
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (212µs) from=127.0.0.1:48820
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (211.333µs) from=127.0.0.1:48821
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (510.333µs) from=127.0.0.1:48822
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (249.333µs) from=127.0.0.1:48823
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (259.667µs) from=127.0.0.1:48824
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (216µs) from=127.0.0.1:48825
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (208.334µs) from=127.0.0.1:48826
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (215µs) from=127.0.0.1:48827
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (208.667µs) from=127.0.0.1:48828
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (229.666µs) from=127.0.0.1:48829
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (209.667µs) from=127.0.0.1:48830
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (224.333µs) from=127.0.0.1:48831
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (211.333µs) from=127.0.0.1:48832
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (216.667µs) from=127.0.0.1:48833
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (231.333µs) from=127.0.0.1:48834
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (228.666µs) from=127.0.0.1:48835
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (216.333µs) from=127.0.0.1:48836
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (217.667µs) from=127.0.0.1:48837
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (210µs) from=127.0.0.1:48838
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (230.667µs) from=127.0.0.1:48839
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (212µs) from=127.0.0.1:48840
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (249.334µs) from=127.0.0.1:48841
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (484.333µs) from=127.0.0.1:48842
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (211.333µs) from=127.0.0.1:48843
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (213.667µs) from=127.0.0.1:48844
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (266µs) from=127.0.0.1:48845
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (218µs) from=127.0.0.1:48846
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (248µs) from=127.0.0.1:48847
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (214.334µs) from=127.0.0.1:48848
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (213.667µs) from=127.0.0.1:48849
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (208.333µs) from=127.0.0.1:48850
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (259.666µs) from=127.0.0.1:48851
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (219.334µs) from=127.0.0.1:48852
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (221.334µs) from=127.0.0.1:48853
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (232.666µs) from=127.0.0.1:48854
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (210µs) from=127.0.0.1:48855
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (219µs) from=127.0.0.1:48856
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (214.667µs) from=127.0.0.1:48857
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (215.333µs) from=127.0.0.1:48858
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (226.667µs) from=127.0.0.1:48859
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (210.666µs) from=127.0.0.1:48860
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (245µs) from=127.0.0.1:48861
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (213.666µs) from=127.0.0.1:48862
2016/03/28 05:54:00 [DEBUG] http: Request GET /v1/catalog/nodes (213.333µs) from=127.0.0.1:48863
2016/03/28 05:54:01 [DEBUG] http: Request GET /v1/catalog/nodes (218µs) from=127.0.0.1:48864
2016/03/28 05:54:01 [DEBUG] http: Request GET /v1/catalog/nodes (210.333µs) from=127.0.0.1:48865
2016/03/28 05:54:01 [DEBUG] http: Request GET /v1/catalog/nodes (262.334µs) from=127.0.0.1:48866
2016/03/28 05:54:01 [DEBUG] http: Request GET /v1/agent/self (611µs) from=127.0.0.1:48867
2016/03/28 05:54:01 [DEBUG] http: Request PUT /v1/session/create (200.970333ms) from=127.0.0.1:48868
2016/03/28 05:54:01 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?wait=15000ms (502.667µs) from=127.0.0.1:48869
2016/03/28 05:54:01 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?acquire=2a3b888c-d47b-a7a3-1734-7a01ffa8fa02&flags=3304740253564472344 (405.256334ms) from=127.0.0.1:48870
2016/03/28 05:54:01 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent= (923.334µs) from=127.0.0.1:48871
2016/03/28 05:54:02 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?flags=3304740253564472344&release=2a3b888c-d47b-a7a3-1734-7a01ffa8fa02 (642.148666ms) from=127.0.0.1:48873
2016/03/28 05:54:02 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent=&index=5 (646.648667ms) from=127.0.0.1:48872
2016/03/28 05:54:02 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (528.666µs) from=127.0.0.1:48875
2016/03/28 05:54:02 [DEBUG] http: Request PUT /v1/session/destroy/2a3b888c-d47b-a7a3-1734-7a01ffa8fa02 (383.724333ms) from=127.0.0.1:48874
2016/03/28 05:54:03 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=6 (688.100333ms) from=127.0.0.1:48876
2016/03/28 05:54:03 [DEBUG] http: Shutting down http server (127.0.0.1:10611)
--- PASS: TestLockCommand_MonitorRetry_Lock_Default (5.40s)
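The monitor that each of these lock tests starts sits on a blocking query: once the lock is held, GET /v1/kv/test/prefix/.lock?consistent=&index=5 does not return until the key changes or the wait expires, which is why those reads take hundreds of milliseconds in the log while ordinary reads finish in microseconds. A minimal sketch of the index/wait long-poll, assuming an agent on 127.0.0.1:8500:

// Sketch, not part of the build log: a blocking (long-poll) KV read.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	url := "http://127.0.0.1:8500/v1/kv/test/prefix/.lock" // assumed agent/key

	// First read: remember the current index from the X-Consul-Index header.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	index := resp.Header.Get("X-Consul-Index")
	resp.Body.Close()

	// Second read: blocks until the key moves past that index or 10s elapse.
	resp, err = http.Get(fmt.Sprintf("%s?index=%s&wait=10s", url, index))
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("key changed or wait elapsed; new index:", resp.Header.Get("X-Consul-Index"))
}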
=== RUN   TestLockCommand_MonitorRetry_Semaphore_Default
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44049
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (356.333µs) from=127.0.0.1:44049
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44050
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (246.667µs) from=127.0.0.1:44050
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44051
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (245.333µs) from=127.0.0.1:44051
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44052
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (224.666µs) from=127.0.0.1:44052
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44053
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (238.334µs) from=127.0.0.1:44053
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44054
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (300.667µs) from=127.0.0.1:44054
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44055
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (301.333µs) from=127.0.0.1:44055
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44056
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (296.667µs) from=127.0.0.1:44056
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44057
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (279.333µs) from=127.0.0.1:44057
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44058
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (297.667µs) from=127.0.0.1:44058
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44059
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (274µs) from=127.0.0.1:44059
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44060
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (250µs) from=127.0.0.1:44060
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44061
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (339µs) from=127.0.0.1:44061
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44062
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (337µs) from=127.0.0.1:44062
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44063
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (216.333µs) from=127.0.0.1:44063
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44064
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (418.333µs) from=127.0.0.1:44064
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44065
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (409.667µs) from=127.0.0.1:44065
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44066
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (370.333µs) from=127.0.0.1:44066
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44067
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (239.667µs) from=127.0.0.1:44067
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44068
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (229µs) from=127.0.0.1:44068
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44069
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (231.334µs) from=127.0.0.1:44069
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44070
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (244.667µs) from=127.0.0.1:44070
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44071
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (227µs) from=127.0.0.1:44071
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44072
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (232.334µs) from=127.0.0.1:44072
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44073
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (224.667µs) from=127.0.0.1:44073
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44074
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (282.667µs) from=127.0.0.1:44074
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44075
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (296.333µs) from=127.0.0.1:44075
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44076
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (236.333µs) from=127.0.0.1:44076
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44077
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (226µs) from=127.0.0.1:44077
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44078
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (240.334µs) from=127.0.0.1:44078
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44079
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (227µs) from=127.0.0.1:44079
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44080
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (217.667µs) from=127.0.0.1:44080
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44081
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (234.333µs) from=127.0.0.1:44081
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44082
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (309.334µs) from=127.0.0.1:44082
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44083
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (231.667µs) from=127.0.0.1:44083
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44084
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (305µs) from=127.0.0.1:44084
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44085
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (228.666µs) from=127.0.0.1:44085
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44086
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (237µs) from=127.0.0.1:44086
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44087
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (231µs) from=127.0.0.1:44087
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44088
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (717.334µs) from=127.0.0.1:44088
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44089
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (361.666µs) from=127.0.0.1:44089
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44090
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (232µs) from=127.0.0.1:44090
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44091
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (326.667µs) from=127.0.0.1:44091
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44092
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (215.667µs) from=127.0.0.1:44092
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44093
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (243.667µs) from=127.0.0.1:44093
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44094
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (227.334µs) from=127.0.0.1:44094
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44095
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (230.333µs) from=127.0.0.1:44095
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44096
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (249µs) from=127.0.0.1:44096
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44097
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (232µs) from=127.0.0.1:44097
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44098
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (225µs) from=127.0.0.1:44098
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44099
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (228.666µs) from=127.0.0.1:44099
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44100
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (465.333µs) from=127.0.0.1:44100
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44101
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (232.667µs) from=127.0.0.1:44101
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44102
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (234.333µs) from=127.0.0.1:44102
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44103
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (556.334µs) from=127.0.0.1:44103
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44104
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (251µs) from=127.0.0.1:44104
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44105
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (226.666µs) from=127.0.0.1:44105
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44106
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (352.333µs) from=127.0.0.1:44106
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44107
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:44107
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44108
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (219.666µs) from=127.0.0.1:44108
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44109
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (236µs) from=127.0.0.1:44109
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44110
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (213.333µs) from=127.0.0.1:44110
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44111
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (224.667µs) from=127.0.0.1:44111
2016/03/28 05:54:05 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44112
2016/03/28 05:54:05 [DEBUG] http: Request GET /v1/catalog/nodes (223µs) from=127.0.0.1:44112
2016/03/28 05:54:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44113
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (442.667µs) from=127.0.0.1:44113
2016/03/28 05:54:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44114
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (217µs) from=127.0.0.1:44114
2016/03/28 05:54:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44115
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (223µs) from=127.0.0.1:44115
2016/03/28 05:54:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44116
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (237.667µs) from=127.0.0.1:44116
2016/03/28 05:54:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44117
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (221µs) from=127.0.0.1:44117
2016/03/28 05:54:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44118
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (219.666µs) from=127.0.0.1:44118
2016/03/28 05:54:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44119
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (297.333µs) from=127.0.0.1:44119
2016/03/28 05:54:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44120
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (210.666µs) from=127.0.0.1:44120
2016/03/28 05:54:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44121
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (296µs) from=127.0.0.1:44121
2016/03/28 05:54:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44122
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (277.667µs) from=127.0.0.1:44122
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (239µs) from=127.0.0.1:44123
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (207.667µs) from=127.0.0.1:44124
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (209.334µs) from=127.0.0.1:44125
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (196.666µs) from=127.0.0.1:44126
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (217µs) from=127.0.0.1:44127
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (235µs) from=127.0.0.1:44128
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (209.667µs) from=127.0.0.1:44129
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (214.333µs) from=127.0.0.1:44130
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (219.666µs) from=127.0.0.1:44131
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (225µs) from=127.0.0.1:44132
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (212.333µs) from=127.0.0.1:44133
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (217µs) from=127.0.0.1:44134
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (216.666µs) from=127.0.0.1:44135
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (208.667µs) from=127.0.0.1:44136
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (225µs) from=127.0.0.1:44137
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (218µs) from=127.0.0.1:44138
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (211µs) from=127.0.0.1:44139
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (202µs) from=127.0.0.1:44140
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (377.333µs) from=127.0.0.1:44141
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (416µs) from=127.0.0.1:44142
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (341µs) from=127.0.0.1:44143
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (210.334µs) from=127.0.0.1:44144
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (208.667µs) from=127.0.0.1:44145
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (278.334µs) from=127.0.0.1:44146
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (231.333µs) from=127.0.0.1:44147
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (210.333µs) from=127.0.0.1:44148
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (209.333µs) from=127.0.0.1:44149
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (222µs) from=127.0.0.1:44150
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (389µs) from=127.0.0.1:44151
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (214.333µs) from=127.0.0.1:44152
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (241.334µs) from=127.0.0.1:44153
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (228µs) from=127.0.0.1:44154
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (203µs) from=127.0.0.1:44155
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (218.667µs) from=127.0.0.1:44156
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (210.666µs) from=127.0.0.1:44157
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (208µs) from=127.0.0.1:44158
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (218.666µs) from=127.0.0.1:44159
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (199µs) from=127.0.0.1:44160
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (242.666µs) from=127.0.0.1:44161
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (220.667µs) from=127.0.0.1:44162
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (213.667µs) from=127.0.0.1:44163
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (238µs) from=127.0.0.1:44164
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (227.333µs) from=127.0.0.1:44165
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (203µs) from=127.0.0.1:44166
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (218.333µs) from=127.0.0.1:44167
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (284.666µs) from=127.0.0.1:44168
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (215.667µs) from=127.0.0.1:44169
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (215µs) from=127.0.0.1:44170
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (209µs) from=127.0.0.1:44171
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (222.666µs) from=127.0.0.1:44172
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (202.667µs) from=127.0.0.1:44173
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (220µs) from=127.0.0.1:44174
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (213.667µs) from=127.0.0.1:44175
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (202µs) from=127.0.0.1:44176
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (206µs) from=127.0.0.1:44177
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (219.666µs) from=127.0.0.1:44178
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (205.667µs) from=127.0.0.1:44179
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (208µs) from=127.0.0.1:44180
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (204µs) from=127.0.0.1:44181
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/catalog/nodes (248µs) from=127.0.0.1:44182
2016/03/28 05:54:06 [DEBUG] http: Request GET /v1/agent/self (624.666µs) from=127.0.0.1:44183
2016/03/28 05:54:07 [DEBUG] http: Request PUT /v1/session/create (161.464ms) from=127.0.0.1:44184
2016/03/28 05:54:07 [DEBUG] http: Request PUT /v1/kv/test/prefix/fb305edc-dc2f-02d2-e128-174cb9673385?acquire=fb305edc-dc2f-02d2-e128-174cb9673385&flags=16210313421097356768 (407.966666ms) from=127.0.0.1:44185
2016/03/28 05:54:07 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse=&wait=15000ms (689.666µs) from=127.0.0.1:44186
2016/03/28 05:54:07 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=0&flags=16210313421097356768 (404.938667ms) from=127.0.0.1:44187
2016/03/28 05:54:07 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&recurse= (1.161ms) from=127.0.0.1:44189
2016/03/28 05:54:07 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (562.667µs) from=127.0.0.1:44191
2016/03/28 05:54:08 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=6&flags=16210313421097356768 (169.806667ms) from=127.0.0.1:44192
2016/03/28 05:54:08 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&index=6&recurse= (179.725ms) from=127.0.0.1:44190
2016/03/28 05:54:08 [DEBUG] http: Request DELETE /v1/kv/test/prefix/fb305edc-dc2f-02d2-e128-174cb9673385 (196.659666ms) from=127.0.0.1:44193
2016/03/28 05:54:08 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse= (641.666µs) from=127.0.0.1:44194
2016/03/28 05:54:08 [DEBUG] http: Request PUT /v1/session/destroy/fb305edc-dc2f-02d2-e128-174cb9673385 (153.930333ms) from=127.0.0.1:44195
2016/03/28 05:54:08 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=7 (332.997666ms) from=127.0.0.1:44196
2016/03/28 05:54:08 [DEBUG] http: Shutting down http server (127.0.0.1:10621)
--- PASS: TestLockCommand_MonitorRetry_Semaphore_Default (5.22s)
=== RUN   TestLockCommand_MonitorRetry_Lock_Arg
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49486
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (290µs) from=127.0.0.1:49486
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49487
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (245.667µs) from=127.0.0.1:49487
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49488
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (251.666µs) from=127.0.0.1:49488
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49489
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (234.666µs) from=127.0.0.1:49489
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49490
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (294.334µs) from=127.0.0.1:49490
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49491
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (323µs) from=127.0.0.1:49491
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49492
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (223µs) from=127.0.0.1:49492
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49493
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (378µs) from=127.0.0.1:49493
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49494
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (283µs) from=127.0.0.1:49494
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49495
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (216.667µs) from=127.0.0.1:49495
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49496
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (230.666µs) from=127.0.0.1:49496
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49497
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:49497
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49498
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (223µs) from=127.0.0.1:49498
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49499
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (204.667µs) from=127.0.0.1:49499
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49500
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (207.333µs) from=127.0.0.1:49500
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49501
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (285.667µs) from=127.0.0.1:49501
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49502
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (307.334µs) from=127.0.0.1:49502
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49503
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (216.667µs) from=127.0.0.1:49503
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49504
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (279.666µs) from=127.0.0.1:49504
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49505
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (286.334µs) from=127.0.0.1:49505
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49506
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (213µs) from=127.0.0.1:49506
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49507
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (215µs) from=127.0.0.1:49507
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49508
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (273.334µs) from=127.0.0.1:49508
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49509
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (245.334µs) from=127.0.0.1:49509
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49510
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (242.667µs) from=127.0.0.1:49510
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49511
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (235.333µs) from=127.0.0.1:49511
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49512
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (220.667µs) from=127.0.0.1:49512
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49513
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (280.667µs) from=127.0.0.1:49513
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49514
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (272.666µs) from=127.0.0.1:49514
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49515
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (224µs) from=127.0.0.1:49515
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49516
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (222.666µs) from=127.0.0.1:49516
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49517
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (713.333µs) from=127.0.0.1:49517
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49518
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (306.333µs) from=127.0.0.1:49518
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49519
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (288.666µs) from=127.0.0.1:49519
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49520
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (240µs) from=127.0.0.1:49520
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49521
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (237.666µs) from=127.0.0.1:49521
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49522
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (218.667µs) from=127.0.0.1:49522
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49523
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (210.333µs) from=127.0.0.1:49523
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49524
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (223µs) from=127.0.0.1:49524
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49525
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (522µs) from=127.0.0.1:49525
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49526
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (221.666µs) from=127.0.0.1:49526
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49527
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (448.333µs) from=127.0.0.1:49527
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49528
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (468.667µs) from=127.0.0.1:49528
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49529
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (262.334µs) from=127.0.0.1:49529
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49530
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (463.667µs) from=127.0.0.1:49530
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49531
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (277.666µs) from=127.0.0.1:49531
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49532
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (251.333µs) from=127.0.0.1:49532
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49533
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (228µs) from=127.0.0.1:49533
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49534
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (247.334µs) from=127.0.0.1:49534
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49535
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (602µs) from=127.0.0.1:49535
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49536
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (278.333µs) from=127.0.0.1:49536
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49537
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (273.333µs) from=127.0.0.1:49537
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49538
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (414µs) from=127.0.0.1:49538
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49539
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (371µs) from=127.0.0.1:49539
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49540
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (257.667µs) from=127.0.0.1:49540
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49541
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (395µs) from=127.0.0.1:49541
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49542
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (342µs) from=127.0.0.1:49542
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49543
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (307.333µs) from=127.0.0.1:49543
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49544
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (327.667µs) from=127.0.0.1:49544
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49545
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (296µs) from=127.0.0.1:49545
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49546
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (324.666µs) from=127.0.0.1:49546
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49547
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (475.334µs) from=127.0.0.1:49547
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49548
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (335µs) from=127.0.0.1:49548
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49549
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (610.666µs) from=127.0.0.1:49549
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49550
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:49550
2016/03/28 05:54:10 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:49551
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (265.667µs) from=127.0.0.1:49551
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (506.667µs) from=127.0.0.1:49553
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (268.333µs) from=127.0.0.1:49554
2016/03/28 05:54:10 [DEBUG] http: Request GET /v1/catalog/nodes (249.667µs) from=127.0.0.1:49555
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (275.334µs) from=127.0.0.1:49556
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (266.334µs) from=127.0.0.1:49557
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (300µs) from=127.0.0.1:49558
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (326.667µs) from=127.0.0.1:49559
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (252.667µs) from=127.0.0.1:49560
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (307.667µs) from=127.0.0.1:49561
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (316µs) from=127.0.0.1:49562
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (302.333µs) from=127.0.0.1:49563
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:49564
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (274.333µs) from=127.0.0.1:49565
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (304µs) from=127.0.0.1:49566
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (246.667µs) from=127.0.0.1:49567
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (249.667µs) from=127.0.0.1:49568
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (246µs) from=127.0.0.1:49569
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (247µs) from=127.0.0.1:49570
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (270.333µs) from=127.0.0.1:49571
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (256µs) from=127.0.0.1:49572
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (246µs) from=127.0.0.1:49573
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (317µs) from=127.0.0.1:49574
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (214µs) from=127.0.0.1:49575
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (211.333µs) from=127.0.0.1:49576
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (210µs) from=127.0.0.1:49577
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (213µs) from=127.0.0.1:49578
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (213.334µs) from=127.0.0.1:49579
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (403µs) from=127.0.0.1:49580
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (525µs) from=127.0.0.1:49581
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (273.334µs) from=127.0.0.1:49582
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (278.333µs) from=127.0.0.1:49583
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (248.667µs) from=127.0.0.1:49584
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (254µs) from=127.0.0.1:49585
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (292.333µs) from=127.0.0.1:49586
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (243µs) from=127.0.0.1:49587
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (254.333µs) from=127.0.0.1:49588
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (240.333µs) from=127.0.0.1:49589
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (251.667µs) from=127.0.0.1:49590
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (322.333µs) from=127.0.0.1:49591
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (252.667µs) from=127.0.0.1:49592
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (247.667µs) from=127.0.0.1:49593
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (242.667µs) from=127.0.0.1:49594
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (253.334µs) from=127.0.0.1:49595
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (259µs) from=127.0.0.1:49596
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (309.667µs) from=127.0.0.1:49597
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (268.667µs) from=127.0.0.1:49598
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (249.666µs) from=127.0.0.1:49599
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (317µs) from=127.0.0.1:49600
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (256.667µs) from=127.0.0.1:49601
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (268.666µs) from=127.0.0.1:49602
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (285µs) from=127.0.0.1:49603
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (328µs) from=127.0.0.1:49604
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (398.667µs) from=127.0.0.1:49605
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (222.334µs) from=127.0.0.1:49606
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (210.666µs) from=127.0.0.1:49607
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (221µs) from=127.0.0.1:49608
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (216.666µs) from=127.0.0.1:49609
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (231.333µs) from=127.0.0.1:49610
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (482.334µs) from=127.0.0.1:49611
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (264µs) from=127.0.0.1:49612
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (348µs) from=127.0.0.1:49613
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (355µs) from=127.0.0.1:49614
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (248.666µs) from=127.0.0.1:49615
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (470µs) from=127.0.0.1:49616
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (249.666µs) from=127.0.0.1:49617
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (415.667µs) from=127.0.0.1:49618
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (252.666µs) from=127.0.0.1:49619
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (232.666µs) from=127.0.0.1:49620
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (217.667µs) from=127.0.0.1:49621
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (223.667µs) from=127.0.0.1:49622
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (213.667µs) from=127.0.0.1:49623
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (218.333µs) from=127.0.0.1:49624
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (334.334µs) from=127.0.0.1:49625
2016/03/28 05:54:11 [DEBUG] http: Request GET /v1/catalog/nodes (448.666µs) from=127.0.0.1:49626
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/catalog/nodes (239.334µs) from=127.0.0.1:49627
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/catalog/nodes (299µs) from=127.0.0.1:49628
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/catalog/nodes (240.666µs) from=127.0.0.1:49629
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/catalog/nodes (236.333µs) from=127.0.0.1:49630
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/catalog/nodes (263.667µs) from=127.0.0.1:49631
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/catalog/nodes (223.667µs) from=127.0.0.1:49632
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/catalog/nodes (214.667µs) from=127.0.0.1:49633
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/catalog/nodes (212.333µs) from=127.0.0.1:49634
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/catalog/nodes (211.667µs) from=127.0.0.1:49635
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/catalog/nodes (258.666µs) from=127.0.0.1:49636
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/catalog/nodes (292µs) from=127.0.0.1:49637
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/catalog/nodes (294.667µs) from=127.0.0.1:49638
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/agent/self (795.334µs) from=127.0.0.1:49639
2016/03/28 05:54:12 [DEBUG] http: Request PUT /v1/session/create (285.037334ms) from=127.0.0.1:49640
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?wait=15000ms (528.333µs) from=127.0.0.1:49641
2016/03/28 05:54:12 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?acquire=4275ea64-5d87-53dd-ea20-1de4b23f5dae&flags=3304740253564472344 (405.016334ms) from=127.0.0.1:49642
2016/03/28 05:54:12 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent= (1.086333ms) from=127.0.0.1:49643
2016/03/28 05:54:13 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?flags=3304740253564472344&release=4275ea64-5d87-53dd-ea20-1de4b23f5dae (406.038ms) from=127.0.0.1:49645
2016/03/28 05:54:13 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent=&index=5 (412.534ms) from=127.0.0.1:49644
2016/03/28 05:54:13 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (527.666µs) from=127.0.0.1:49646
2016/03/28 05:54:13 [DEBUG] http: Request PUT /v1/session/destroy/4275ea64-5d87-53dd-ea20-1de4b23f5dae (509.125667ms) from=127.0.0.1:49647
2016/03/28 05:54:13 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=6 (506.65ms) from=127.0.0.1:49648
2016/03/28 05:54:13 [DEBUG] http: Shutting down http server (127.0.0.1:10631)
--- PASS: TestLockCommand_MonitorRetry_Lock_Arg (5.13s)
=== RUN   TestLockCommand_MonitorRetry_Semaphore_Arg
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52307
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (266.667µs) from=127.0.0.1:52307
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52308
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (273.333µs) from=127.0.0.1:52308
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52309
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (248.667µs) from=127.0.0.1:52309
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52310
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (228µs) from=127.0.0.1:52310
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52311
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (224.334µs) from=127.0.0.1:52311
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52312
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (230µs) from=127.0.0.1:52312
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52313
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (491µs) from=127.0.0.1:52313
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52314
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (1.905ms) from=127.0.0.1:52314
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52315
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (318.333µs) from=127.0.0.1:52315
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52316
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (240.333µs) from=127.0.0.1:52316
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52317
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (562.333µs) from=127.0.0.1:52317
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52318
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (237.667µs) from=127.0.0.1:52318
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52319
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (474.333µs) from=127.0.0.1:52319
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52320
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (264.333µs) from=127.0.0.1:52320
2016/03/28 05:54:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52321
2016/03/28 05:54:14 [DEBUG] http: Request GET /v1/catalog/nodes (328µs) from=127.0.0.1:52321
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52322
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (245.667µs) from=127.0.0.1:52322
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52323
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (240.667µs) from=127.0.0.1:52323
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52324
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (233µs) from=127.0.0.1:52324
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52325
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (1.271666ms) from=127.0.0.1:52325
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52326
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (252µs) from=127.0.0.1:52326
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52327
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (319.667µs) from=127.0.0.1:52327
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52328
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (295.667µs) from=127.0.0.1:52328
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52329
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (338.334µs) from=127.0.0.1:52329
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52330
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (383.333µs) from=127.0.0.1:52330
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52331
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:52331
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52332
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (264µs) from=127.0.0.1:52332
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52333
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (275.667µs) from=127.0.0.1:52333
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52334
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (279µs) from=127.0.0.1:52334
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52335
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (342µs) from=127.0.0.1:52335
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52336
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (308.333µs) from=127.0.0.1:52336
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52337
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (322µs) from=127.0.0.1:52337
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52338
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (272.666µs) from=127.0.0.1:52338
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52339
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (279.333µs) from=127.0.0.1:52339
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52340
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (260.666µs) from=127.0.0.1:52340
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52341
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (273µs) from=127.0.0.1:52341
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52342
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (278.333µs) from=127.0.0.1:52342
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52343
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (274.666µs) from=127.0.0.1:52343
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52344
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (317.334µs) from=127.0.0.1:52344
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52345
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (268.667µs) from=127.0.0.1:52345
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52346
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (257µs) from=127.0.0.1:52346
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52347
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (269.666µs) from=127.0.0.1:52347
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52348
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (264.333µs) from=127.0.0.1:52348
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52349
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (262µs) from=127.0.0.1:52349
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52350
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (441.667µs) from=127.0.0.1:52350
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52351
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (269.333µs) from=127.0.0.1:52351
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52352
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:52352
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52353
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (269.667µs) from=127.0.0.1:52353
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52354
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (266.666µs) from=127.0.0.1:52354
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52355
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (258µs) from=127.0.0.1:52355
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52356
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (248.667µs) from=127.0.0.1:52356
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52357
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (532µs) from=127.0.0.1:52357
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52358
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (280.334µs) from=127.0.0.1:52358
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52359
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (397µs) from=127.0.0.1:52359
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52360
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (281.334µs) from=127.0.0.1:52360
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52361
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (273µs) from=127.0.0.1:52361
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52362
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (282.333µs) from=127.0.0.1:52362
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52363
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (264.333µs) from=127.0.0.1:52363
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52364
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (288.667µs) from=127.0.0.1:52364
2016/03/28 05:54:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52365
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (314.334µs) from=127.0.0.1:52365
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (256.667µs) from=127.0.0.1:52366
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (316µs) from=127.0.0.1:52367
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (254µs) from=127.0.0.1:52368
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:52369
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (293.667µs) from=127.0.0.1:52370
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (364.666µs) from=127.0.0.1:52371
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (305.334µs) from=127.0.0.1:52372
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (256µs) from=127.0.0.1:52373
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (270µs) from=127.0.0.1:52374
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:52375
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (250.334µs) from=127.0.0.1:52376
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (244.334µs) from=127.0.0.1:52377
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (247.666µs) from=127.0.0.1:52378
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (265µs) from=127.0.0.1:52379
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (497.666µs) from=127.0.0.1:52380
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (290.333µs) from=127.0.0.1:52381
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (285.667µs) from=127.0.0.1:52382
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (437µs) from=127.0.0.1:52383
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (228.333µs) from=127.0.0.1:52384
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (251.333µs) from=127.0.0.1:52385
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (244.667µs) from=127.0.0.1:52386
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (229.333µs) from=127.0.0.1:52387
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (895µs) from=127.0.0.1:52388
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (432.334µs) from=127.0.0.1:52389
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (252µs) from=127.0.0.1:52390
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (306µs) from=127.0.0.1:52391
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (243.333µs) from=127.0.0.1:52392
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (248.334µs) from=127.0.0.1:52393
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (273.333µs) from=127.0.0.1:52394
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (301.666µs) from=127.0.0.1:52395
2016/03/28 05:54:15 [DEBUG] http: Request GET /v1/catalog/nodes (256µs) from=127.0.0.1:52396
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/catalog/nodes (246.667µs) from=127.0.0.1:52397
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/catalog/nodes (254µs) from=127.0.0.1:52398
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/catalog/nodes (256.334µs) from=127.0.0.1:52399
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/catalog/nodes (252µs) from=127.0.0.1:52400
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/catalog/nodes (246µs) from=127.0.0.1:52401
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/catalog/nodes (253.334µs) from=127.0.0.1:52402
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/catalog/nodes (325.667µs) from=127.0.0.1:52403
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/catalog/nodes (251.333µs) from=127.0.0.1:52404
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/catalog/nodes (279µs) from=127.0.0.1:52405
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/catalog/nodes (254.666µs) from=127.0.0.1:52406
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/catalog/nodes (2.684ms) from=127.0.0.1:52407
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/catalog/nodes (348.334µs) from=127.0.0.1:52408
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/agent/self (740.333µs) from=127.0.0.1:52409
2016/03/28 05:54:16 [DEBUG] http: Request PUT /v1/session/create (332.208333ms) from=127.0.0.1:52410
2016/03/28 05:54:16 [DEBUG] http: Request PUT /v1/kv/test/prefix/6593e29b-7e6d-36ba-4a0f-9a3801216e09?acquire=6593e29b-7e6d-36ba-4a0f-9a3801216e09&flags=16210313421097356768 (330.85ms) from=127.0.0.1:52411
2016/03/28 05:54:16 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse=&wait=15000ms (739.667µs) from=127.0.0.1:52412
2016/03/28 05:54:17 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=0&flags=16210313421097356768 (480.697ms) from=127.0.0.1:52413
2016/03/28 05:54:17 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&recurse= (1.273ms) from=127.0.0.1:52414
2016/03/28 05:54:17 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (566.334µs) from=127.0.0.1:52416
2016/03/28 05:54:17 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=6&flags=16210313421097356768 (303.924333ms) from=127.0.0.1:52417
2016/03/28 05:54:17 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&index=6&recurse= (311.121666ms) from=127.0.0.1:52415
2016/03/28 05:54:17 [DEBUG] http: Request DELETE /v1/kv/test/prefix/6593e29b-7e6d-36ba-4a0f-9a3801216e09 (297.661334ms) from=127.0.0.1:52418
2016/03/28 05:54:17 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse= (625.334µs) from=127.0.0.1:52419
2016/03/28 05:54:18 [DEBUG] http: Request PUT /v1/session/destroy/6593e29b-7e6d-36ba-4a0f-9a3801216e09 (210.015333ms) from=127.0.0.1:52420
2016/03/28 05:54:18 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=7 (405.369ms) from=127.0.0.1:52421
2016/03/28 05:54:18 [DEBUG] http: Shutting down http server (127.0.0.1:10641)
--- PASS: TestLockCommand_MonitorRetry_Semaphore_Arg (4.53s)
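[editor's note] The request sequence above (PUT /v1/session/create, a KV write under test/prefix keyed by the session ID with ?acquire=, check-and-set updates of test/prefix/.lock, then session destroy) is Consul's session/KV semaphore flow that the lock command drives. A minimal sketch of the same flow through the github.com/hashicorp/consul/api client, assuming a local agent on the default address; the prefix and slot limit are illustrative, not taken from the test:

    package main

    import (
        "log"

        "github.com/hashicorp/consul/api"
    )

    func main() {
        // Connect to the local agent (127.0.0.1:8500 by default).
        client, err := api.NewClient(api.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }

        // SemaphorePrefix wraps the flow seen in the log: create a session, write a
        // contender key under the prefix, then contend for the ".lock" key with
        // check-and-set updates.
        sem, err := client.SemaphorePrefix("test/prefix", 2) // illustrative prefix and limit
        if err != nil {
            log.Fatal(err)
        }

        // Acquire blocks until a slot is held; the returned channel closes if it is lost.
        lost, err := sem.Acquire(nil)
        if err != nil {
            log.Fatal(err)
        }
        defer sem.Release() // deletes the contender key and destroys the session

        select {
        case <-lost:
            log.Println("semaphore slot lost")
        default:
            log.Println("semaphore slot held")
        }
    }
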
=== RUN   TestMaintCommand_implements
--- PASS: TestMaintCommand_implements (0.00s)
=== RUN   TestMaintCommandRun_ConflictingArgs
--- PASS: TestMaintCommandRun_ConflictingArgs (0.00s)
=== RUN   TestMaintCommandRun_NoArgs
2016/03/28 05:54:20 [DEBUG] http: Request GET /v1/agent/self (996µs) from=127.0.0.1:43907
2016/03/28 05:54:20 [DEBUG] http: Request GET /v1/agent/checks (298.334µs) from=127.0.0.1:43908
2016/03/28 05:54:21 [DEBUG] http: Shutting down http server (127.0.0.1:10651)
--- PASS: TestMaintCommandRun_NoArgs (2.83s)
=== RUN   TestMaintCommandRun_EnableNodeMaintenance
2016/03/28 05:54:22 [DEBUG] http: Request GET /v1/agent/self (638.333µs) from=127.0.0.1:40372
2016/03/28 05:54:22 [ERR] agent: failed to sync changes: No cluster leader
2016/03/28 05:54:22 [DEBUG] http: Request PUT /v1/agent/maintenance?enable=true&reason=broken (1.225334ms) from=127.0.0.1:40373
2016/03/28 05:54:23 [DEBUG] http: Shutting down http server (127.0.0.1:10661)
--- PASS: TestMaintCommandRun_EnableNodeMaintenance (1.81s)
=== RUN   TestMaintCommandRun_DisableNodeMaintenance
2016/03/28 05:54:23 [DEBUG] http: Request GET /v1/agent/self (654.334µs) from=127.0.0.1:33079
2016/03/28 05:54:23 [ERR] agent: failed to sync changes: No cluster leader
2016/03/28 05:54:23 [DEBUG] http: Request PUT /v1/agent/maintenance?enable=false (225.667µs) from=127.0.0.1:33080
2016/03/28 05:54:24 [DEBUG] http: Shutting down http server (127.0.0.1:10671)
--- PASS: TestMaintCommandRun_DisableNodeMaintenance (1.66s)
=== RUN   TestMaintCommandRun_EnableServiceMaintenance
2016/03/28 05:54:25 [DEBUG] http: Request GET /v1/agent/self (639µs) from=127.0.0.1:49027
2016/03/28 05:54:25 [ERR] agent: failed to sync changes: No cluster leader
2016/03/28 05:54:25 [DEBUG] http: Request PUT /v1/agent/service/maintenance/test?enable=true&reason=broken (1.186334ms) from=127.0.0.1:49028
2016/03/28 05:54:26 [DEBUG] http: Shutting down http server (127.0.0.1:10681)
--- PASS: TestMaintCommandRun_EnableServiceMaintenance (1.47s)
=== RUN   TestMaintCommandRun_DisableServiceMaintenance
2016/03/28 05:54:26 [DEBUG] http: Request GET /v1/agent/self (613.334µs) from=127.0.0.1:55828
2016/03/28 05:54:26 [ERR] agent: failed to sync changes: No cluster leader
2016/03/28 05:54:26 [DEBUG] http: Request PUT /v1/agent/service/maintenance/test?enable=false (308.334µs) from=127.0.0.1:55829
2016/03/28 05:54:27 [DEBUG] http: Shutting down http server (127.0.0.1:10691)
--- PASS: TestMaintCommandRun_DisableServiceMaintenance (1.56s)
=== RUN   TestMaintCommandRun_ServiceMaintenance_NoService
2016/03/28 05:54:28 [DEBUG] http: Request GET /v1/agent/self (626µs) from=127.0.0.1:34811
2016/03/28 05:54:28 [DEBUG] http: Request PUT /v1/agent/service/maintenance/redis?enable=true&reason=broken (121.667µs) from=127.0.0.1:34812
2016/03/28 05:54:29 [DEBUG] http: Shutting down http server (127.0.0.1:10701)
--- PASS: TestMaintCommandRun_ServiceMaintenance_NoService (1.92s)
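[editor's note] The maintenance tests above exercise the agent's /v1/agent/maintenance and /v1/agent/service/maintenance/<id> endpoints. A hedged sketch of the equivalent calls through the Go API client; the service ID "test" and the reason string mirror the requests in the log, and a reachable local agent is assumed:

    package main

    import (
        "log"

        "github.com/hashicorp/consul/api"
    )

    func main() {
        client, err := api.NewClient(api.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }
        agent := client.Agent()

        // Node-level maintenance: PUT /v1/agent/maintenance?enable=true&reason=broken
        if err := agent.EnableNodeMaintenance("broken"); err != nil {
            log.Fatal(err)
        }
        // PUT /v1/agent/maintenance?enable=false
        if err := agent.DisableNodeMaintenance(); err != nil {
            log.Fatal(err)
        }

        // Service-level maintenance for a registered service ID ("test" in the log).
        if err := agent.EnableServiceMaintenance("test", "broken"); err != nil {
            log.Fatal(err)
        }
        if err := agent.DisableServiceMaintenance("test"); err != nil {
            log.Fatal(err)
        }
    }
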
=== RUN   TestMembersCommand_implements
--- PASS: TestMembersCommand_implements (0.00s)
=== RUN   TestMembersCommandRun
2016/03/28 05:54:30 [INFO] agent.rpc: Accepted client: 127.0.0.1:53334
2016/03/28 05:54:31 [DEBUG] http: Shutting down http server (127.0.0.1:10711)
--- PASS: TestMembersCommandRun (1.74s)
=== RUN   TestMembersCommandRun_WAN
2016/03/28 05:54:32 [INFO] agent.rpc: Accepted client: 127.0.0.1:36073
2016/03/28 05:54:33 [DEBUG] http: Shutting down http server (127.0.0.1:10721)
--- PASS: TestMembersCommandRun_WAN (2.02s)
=== RUN   TestMembersCommandRun_statusFilter
2016/03/28 05:54:34 [INFO] agent.rpc: Accepted client: 127.0.0.1:48957
2016/03/28 05:54:35 [DEBUG] http: Shutting down http server (127.0.0.1:10731)
--- PASS: TestMembersCommandRun_statusFilter (1.95s)
=== RUN   TestMembersCommandRun_statusFilter_failed
2016/03/28 05:54:36 [INFO] agent.rpc: Accepted client: 127.0.0.1:49802
2016/03/28 05:54:36 [DEBUG] http: Shutting down http server (127.0.0.1:10741)
--- PASS: TestMembersCommandRun_statusFilter_failed (1.27s)
=== RUN   TestReloadCommand_implements
--- PASS: TestReloadCommand_implements (0.00s)
=== RUN   TestReloadCommandRun
2016/03/28 05:54:37 [INFO] agent.rpc: Accepted client: 127.0.0.1:44804
2016/03/28 05:54:37 [DEBUG] http: Shutting down http server (127.0.0.1:10751)
--- PASS: TestReloadCommandRun (1.20s)
=== RUN   TestAddrFlag_default
--- PASS: TestAddrFlag_default (0.00s)
=== RUN   TestAddrFlag_onlyEnv
--- PASS: TestAddrFlag_onlyEnv (0.00s)
=== RUN   TestAddrFlag_precedence
--- PASS: TestAddrFlag_precedence (0.00s)
=== RUN   TestRTTCommand_Implements
--- PASS: TestRTTCommand_Implements (0.00s)
=== RUN   TestRTTCommand_Run_BadArgs
--- PASS: TestRTTCommand_Run_BadArgs (0.00s)
=== RUN   TestRTTCommand_Run_LAN
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53791
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (264.334µs) from=127.0.0.1:53791
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53792
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (311.667µs) from=127.0.0.1:53792
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53793
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (244.333µs) from=127.0.0.1:53793
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53794
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (278µs) from=127.0.0.1:53794
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53795
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (240.667µs) from=127.0.0.1:53795
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53796
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (285.666µs) from=127.0.0.1:53796
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53797
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (234.667µs) from=127.0.0.1:53797
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53798
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (228.333µs) from=127.0.0.1:53798
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53799
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (221.666µs) from=127.0.0.1:53799
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53800
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:53800
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53801
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (286.666µs) from=127.0.0.1:53801
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53802
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (247.334µs) from=127.0.0.1:53802
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53803
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (218µs) from=127.0.0.1:53803
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53804
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (252µs) from=127.0.0.1:53804
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53805
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (291.666µs) from=127.0.0.1:53805
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53806
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (327.667µs) from=127.0.0.1:53806
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53807
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (232.667µs) from=127.0.0.1:53807
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53808
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (281.667µs) from=127.0.0.1:53808
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53809
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (223µs) from=127.0.0.1:53809
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53810
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (216.333µs) from=127.0.0.1:53810
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53811
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (234µs) from=127.0.0.1:53811
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53812
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (241.666µs) from=127.0.0.1:53812
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53813
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (241.666µs) from=127.0.0.1:53813
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53814
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (218µs) from=127.0.0.1:53814
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53815
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (222.333µs) from=127.0.0.1:53815
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53816
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (383.667µs) from=127.0.0.1:53816
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53817
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (231µs) from=127.0.0.1:53817
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53818
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (230.333µs) from=127.0.0.1:53818
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53819
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (233.667µs) from=127.0.0.1:53819
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53820
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (222µs) from=127.0.0.1:53820
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53821
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (230.333µs) from=127.0.0.1:53821
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53822
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (243.667µs) from=127.0.0.1:53822
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53823
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (319µs) from=127.0.0.1:53823
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53824
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (288.334µs) from=127.0.0.1:53824
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53825
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (338.333µs) from=127.0.0.1:53825
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53826
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (253µs) from=127.0.0.1:53826
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53827
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (231.667µs) from=127.0.0.1:53827
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53828
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (246.333µs) from=127.0.0.1:53828
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53829
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (217µs) from=127.0.0.1:53829
2016/03/28 05:54:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53830
2016/03/28 05:54:38 [DEBUG] http: Request GET /v1/catalog/nodes (295.667µs) from=127.0.0.1:53830
2016/03/28 05:54:39 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53831
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (243µs) from=127.0.0.1:53831
2016/03/28 05:54:39 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53832
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (228.333µs) from=127.0.0.1:53832
2016/03/28 05:54:39 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53833
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (330.333µs) from=127.0.0.1:53833
2016/03/28 05:54:39 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:53834
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (221µs) from=127.0.0.1:53834
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (238µs) from=127.0.0.1:53835
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (439.667µs) from=127.0.0.1:53836
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (228.666µs) from=127.0.0.1:53837
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (210µs) from=127.0.0.1:53838
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (328.333µs) from=127.0.0.1:53839
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (228µs) from=127.0.0.1:53840
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (220.333µs) from=127.0.0.1:53841
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (222.667µs) from=127.0.0.1:53842
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (217.667µs) from=127.0.0.1:53843
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (246.333µs) from=127.0.0.1:53844
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (220.666µs) from=127.0.0.1:53845
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (216.333µs) from=127.0.0.1:53846
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (274µs) from=127.0.0.1:53847
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (240µs) from=127.0.0.1:53848
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (228.333µs) from=127.0.0.1:53849
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (211µs) from=127.0.0.1:53850
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:53851
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (217.667µs) from=127.0.0.1:53852
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (206µs) from=127.0.0.1:53853
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (217.667µs) from=127.0.0.1:53854
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (212.334µs) from=127.0.0.1:53855
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (218µs) from=127.0.0.1:53856
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (235µs) from=127.0.0.1:53857
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (222µs) from=127.0.0.1:53858
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (208.333µs) from=127.0.0.1:53859
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (203µs) from=127.0.0.1:53860
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (240.333µs) from=127.0.0.1:53861
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (203µs) from=127.0.0.1:53862
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (201µs) from=127.0.0.1:53863
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (199.333µs) from=127.0.0.1:53864
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (207µs) from=127.0.0.1:53865
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (202.667µs) from=127.0.0.1:53866
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (208.334µs) from=127.0.0.1:53867
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (208.667µs) from=127.0.0.1:53868
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (244.667µs) from=127.0.0.1:53869
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (210µs) from=127.0.0.1:53870
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (204.667µs) from=127.0.0.1:53871
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (204.666µs) from=127.0.0.1:53872
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (203.333µs) from=127.0.0.1:53873
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (201.666µs) from=127.0.0.1:53874
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (203µs) from=127.0.0.1:53875
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (203.333µs) from=127.0.0.1:53876
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (203.333µs) from=127.0.0.1:53877
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (212.666µs) from=127.0.0.1:53878
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (260µs) from=127.0.0.1:53879
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (213µs) from=127.0.0.1:53880
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (334.667µs) from=127.0.0.1:53881
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (228.333µs) from=127.0.0.1:53882
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (217µs) from=127.0.0.1:53883
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (227µs) from=127.0.0.1:53884
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (286.333µs) from=127.0.0.1:53885
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (207.334µs) from=127.0.0.1:53886
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (213.333µs) from=127.0.0.1:53887
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (238.334µs) from=127.0.0.1:53888
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (225µs) from=127.0.0.1:53889
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (244.333µs) from=127.0.0.1:53890
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (210µs) from=127.0.0.1:53891
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (214µs) from=127.0.0.1:53892
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (217µs) from=127.0.0.1:53893
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (218µs) from=127.0.0.1:53894
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (213µs) from=127.0.0.1:53895
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (212.667µs) from=127.0.0.1:53896
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (209µs) from=127.0.0.1:53897
2016/03/28 05:54:39 [DEBUG] http: Request GET /v1/catalog/nodes (252.333µs) from=127.0.0.1:53898
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/coordinate/nodes (354.334µs) from=127.0.0.1:53899
2016/03/28 05:54:40 [DEBUG] http: Shutting down http server (127.0.0.1:10761)
--- FAIL: TestRTTCommand_Run_LAN (2.41s)
	rtt_test.go:105: bad: 1: "Could not find a coordinate for node \"Node 36\"\n"
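[editor's note] The failing LAN case polls /v1/catalog/nodes until nodes appear and then reads /v1/coordinate/nodes; the message above suggests no network coordinate had been published yet for "Node 36" when the rtt command ran, which looks timing-dependent in a freshly started test agent. For reference, a hedged sketch of reading the same coordinate data with the Go client (the node name is copied from the failure, the local agent address is an assumption):

    package main

    import (
        "log"

        "github.com/hashicorp/consul/api"
    )

    func main() {
        client, err := api.NewClient(api.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }

        // GET /v1/coordinate/nodes: LAN coordinates for every node in the datacenter.
        coords, _, err := client.Coordinate().Nodes(nil)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range coords {
            if c.Node == "Node 36" { // node name from the failing test
                log.Printf("coordinate found: %+v", c.Coord)
            }
        }
    }
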
=== RUN   TestRTTCommand_Run_WAN
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44779
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (348.333µs) from=127.0.0.1:44779
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44780
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (231µs) from=127.0.0.1:44780
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44781
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (245.667µs) from=127.0.0.1:44781
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44782
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (227.333µs) from=127.0.0.1:44782
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44783
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (237µs) from=127.0.0.1:44783
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44784
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (227.333µs) from=127.0.0.1:44784
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44785
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (244.666µs) from=127.0.0.1:44785
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44786
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (227µs) from=127.0.0.1:44786
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44787
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (229µs) from=127.0.0.1:44787
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44788
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (234.666µs) from=127.0.0.1:44788
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44789
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (226µs) from=127.0.0.1:44789
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44790
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (308.334µs) from=127.0.0.1:44790
2016/03/28 05:54:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44791
2016/03/28 05:54:40 [DEBUG] http: Request GET /v1/catalog/nodes (222.667µs) from=127.0.0.1:44791
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44792
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (218.334µs) from=127.0.0.1:44792
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44793
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (333µs) from=127.0.0.1:44793
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44794
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (250µs) from=127.0.0.1:44794
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44795
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (280.667µs) from=127.0.0.1:44795
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44796
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (290µs) from=127.0.0.1:44796
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44797
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (229.666µs) from=127.0.0.1:44797
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44798
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (227.334µs) from=127.0.0.1:44798
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44799
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (224µs) from=127.0.0.1:44799
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44800
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (372.333µs) from=127.0.0.1:44800
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44801
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (213.666µs) from=127.0.0.1:44801
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44802
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (228.666µs) from=127.0.0.1:44802
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44803
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (245.666µs) from=127.0.0.1:44803
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44804
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (270µs) from=127.0.0.1:44804
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44805
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (235µs) from=127.0.0.1:44805
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44806
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (223.334µs) from=127.0.0.1:44806
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44807
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (246.333µs) from=127.0.0.1:44807
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44808
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (254.333µs) from=127.0.0.1:44808
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44809
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (288.333µs) from=127.0.0.1:44809
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44810
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (272.333µs) from=127.0.0.1:44810
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44811
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (221µs) from=127.0.0.1:44811
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44812
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (238.667µs) from=127.0.0.1:44812
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44813
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (226.667µs) from=127.0.0.1:44813
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44814
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (224.333µs) from=127.0.0.1:44814
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44815
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (232.667µs) from=127.0.0.1:44815
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44816
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (228µs) from=127.0.0.1:44816
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44817
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (218µs) from=127.0.0.1:44817
2016/03/28 05:54:41 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:44818
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (226µs) from=127.0.0.1:44818
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (234.666µs) from=127.0.0.1:44819
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (212.333µs) from=127.0.0.1:44820
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (218.333µs) from=127.0.0.1:44821
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (241µs) from=127.0.0.1:44822
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (206µs) from=127.0.0.1:44823
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (216.333µs) from=127.0.0.1:44824
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (223.333µs) from=127.0.0.1:44825
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (216.667µs) from=127.0.0.1:44826
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (203µs) from=127.0.0.1:44827
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (204µs) from=127.0.0.1:44828
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (289.333µs) from=127.0.0.1:44829
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (219µs) from=127.0.0.1:44830
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (229µs) from=127.0.0.1:44831
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (332µs) from=127.0.0.1:44832
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (213µs) from=127.0.0.1:44833
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (210.666µs) from=127.0.0.1:44834
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (210.333µs) from=127.0.0.1:44835
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (256.667µs) from=127.0.0.1:44836
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (219.667µs) from=127.0.0.1:44837
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (210µs) from=127.0.0.1:44838
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (219.334µs) from=127.0.0.1:44839
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (207.667µs) from=127.0.0.1:44840
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (236.333µs) from=127.0.0.1:44841
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (297.333µs) from=127.0.0.1:44842
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (238.333µs) from=127.0.0.1:44843
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (225.334µs) from=127.0.0.1:44844
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (212.667µs) from=127.0.0.1:44845
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (265.667µs) from=127.0.0.1:44846
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (246.666µs) from=127.0.0.1:44847
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (203.667µs) from=127.0.0.1:44848
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (202.333µs) from=127.0.0.1:44849
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (215µs) from=127.0.0.1:44850
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (220.666µs) from=127.0.0.1:44851
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (211.333µs) from=127.0.0.1:44852
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (220.667µs) from=127.0.0.1:44853
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (208µs) from=127.0.0.1:44854
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (201.667µs) from=127.0.0.1:44855
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (208.667µs) from=127.0.0.1:44856
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (208.666µs) from=127.0.0.1:44857
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/catalog/nodes (232.667µs) from=127.0.0.1:44858
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/coordinate/datacenters (652.334µs) from=127.0.0.1:44859
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/agent/self (611.334µs) from=127.0.0.1:44860
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/coordinate/datacenters (214µs) from=127.0.0.1:44861
2016/03/28 05:54:41 [DEBUG] http: Request GET /v1/coordinate/datacenters (215µs) from=127.0.0.1:44862
2016/03/28 05:54:42 [DEBUG] http: Shutting down http server (127.0.0.1:10771)
--- PASS: TestRTTCommand_Run_WAN (1.65s)
=== RUN   TestVersionCommand_implements
--- PASS: TestVersionCommand_implements (0.00s)
=== RUN   TestWatchCommand_implements
--- PASS: TestWatchCommand_implements (0.00s)
=== RUN   TestWatchCommandRun
2016/03/28 05:54:42 [DEBUG] http: Request GET /v1/agent/self (735.667µs) from=127.0.0.1:51771
2016/03/28 05:54:42 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51772
2016/03/28 05:54:42 [DEBUG] http: Request GET /v1/catalog/nodes (408µs) from=127.0.0.1:51772
2016/03/28 05:54:42 consul.watch: Watch (type: nodes) errored: Unexpected response code: 500 (No cluster leader), retry in 5s
2016/03/28 05:54:47 [DEBUG] http: Request GET /v1/catalog/nodes (273.333µs) from=127.0.0.1:51775
2016/03/28 05:54:47 [DEBUG] http: Shutting down http server (127.0.0.1:10781)
--- PASS: TestWatchCommandRun (5.84s)
FAIL
FAIL	github.com/hashicorp/consul/command	102.447s
FAIL	github.com/hashicorp/consul/command/agent [build failed]
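[editor's note] TestWatchCommandRun above shows the watch retry path: a "nodes" watch receives a 500 ("No cluster leader"), logs "retry in 5s", and succeeds on the next poll once a leader exists. A minimal sketch of the same watch using the github.com/hashicorp/consul/watch helper, assuming a reachable agent on 127.0.0.1:8500; the handler body is illustrative:

    package main

    import (
        "log"

        "github.com/hashicorp/consul/watch"
    )

    func main() {
        // A "nodes" watch polls /v1/catalog/nodes with blocking queries and retries
        // on errors such as "No cluster leader", as seen in the log above.
        plan, err := watch.Parse(map[string]interface{}{"type": "nodes"})
        if err != nil {
            log.Fatal(err)
        }
        plan.Handler = func(idx uint64, data interface{}) {
            log.Printf("index=%d nodes=%v", idx, data)
        }
        // Run blocks, invoking the handler on every change.
        if err := plan.Run("127.0.0.1:8500"); err != nil {
            log.Fatal(err)
        }
    }
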
=== RUN   TestACLEndpoint_Apply
2016/03/28 05:53:13 [INFO] raft: Node at 127.0.0.1:15001 [Follower] entering Follower state
2016/03/28 05:53:13 [INFO] serf: EventMemberJoin: Node 15000 127.0.0.1
2016/03/28 05:53:13 [INFO] consul: adding LAN server Node 15000 (Addr: 127.0.0.1:15001) (DC: dc1)
2016/03/28 05:53:13 [INFO] serf: EventMemberJoin: Node 15000.dc1 127.0.0.1
2016/03/28 05:53:13 [INFO] consul: adding WAN server Node 15000.dc1 (Addr: 127.0.0.1:15001) (DC: dc1)
2016/03/28 05:53:13 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:13 [INFO] raft: Node at 127.0.0.1:15001 [Candidate] entering Candidate state
2016/03/28 05:53:13 [DEBUG] raft: Votes needed: 1
2016/03/28 05:53:13 [DEBUG] raft: Vote granted from 127.0.0.1:15001. Tally: 1
2016/03/28 05:53:13 [INFO] raft: Election won. Tally: 1
2016/03/28 05:53:13 [INFO] raft: Node at 127.0.0.1:15001 [Leader] entering Leader state
2016/03/28 05:53:13 [INFO] consul: cluster leadership acquired
2016/03/28 05:53:13 [INFO] consul: New leader elected: Node 15000
2016/03/28 05:53:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:53:14 [DEBUG] raft: Node 127.0.0.1:15001 updated peer set (2): [127.0.0.1:15001]
2016/03/28 05:53:14 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:53:14 [INFO] consul: member 'Node 15000' joined, marking health alive
2016/03/28 05:53:15 [INFO] consul: shutting down server
2016/03/28 05:53:15 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:15 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:16 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACLEndpoint_Apply (3.67s)
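[editor's note] The ACL endpoint tests in this package all repeat the single-node raft bootstrap visible above (heartbeat timeout, a one-vote election, "cluster leadership acquired") before applying ACL operations. A hedged sketch of an ACL create/destroy round trip with the Go client of this era, assuming ACLs are enabled; the management token, token name, and rule string are placeholders:

    package main

    import (
        "log"

        "github.com/hashicorp/consul/api"
    )

    func main() {
        cfg := api.DefaultConfig()
        cfg.Token = "root" // placeholder management token; assumes ACLs are enabled
        client, err := api.NewClient(cfg)
        if err != nil {
            log.Fatal(err)
        }
        acl := client.ACL()

        // Create a client token (PUT /v1/acl/create behind the scenes).
        id, _, err := acl.Create(&api.ACLEntry{
            Name:  "example-token",
            Type:  api.ACLClientType,
            Rules: `key "" { policy = "read" }`,
        }, nil)
        if err != nil {
            log.Fatal(err)
        }
        log.Println("created ACL:", id)

        // Destroy it again (PUT /v1/acl/destroy/<id>).
        if _, err := acl.Destroy(id, nil); err != nil {
            log.Fatal(err)
        }
    }
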
=== RUN   TestACLEndpoint_Update_PurgeCache
2016/03/28 05:53:16 [INFO] raft: Node at 127.0.0.1:15005 [Follower] entering Follower state
2016/03/28 05:53:16 [INFO] serf: EventMemberJoin: Node 15004 127.0.0.1
2016/03/28 05:53:16 [INFO] consul: adding LAN server Node 15004 (Addr: 127.0.0.1:15005) (DC: dc1)
2016/03/28 05:53:16 [INFO] serf: EventMemberJoin: Node 15004.dc1 127.0.0.1
2016/03/28 05:53:16 [INFO] consul: adding WAN server Node 15004.dc1 (Addr: 127.0.0.1:15005) (DC: dc1)
2016/03/28 05:53:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:16 [INFO] raft: Node at 127.0.0.1:15005 [Candidate] entering Candidate state
2016/03/28 05:53:17 [DEBUG] raft: Votes needed: 1
2016/03/28 05:53:17 [DEBUG] raft: Vote granted from 127.0.0.1:15005. Tally: 1
2016/03/28 05:53:17 [INFO] raft: Election won. Tally: 1
2016/03/28 05:53:17 [INFO] raft: Node at 127.0.0.1:15005 [Leader] entering Leader state
2016/03/28 05:53:17 [INFO] consul: cluster leadership acquired
2016/03/28 05:53:17 [INFO] consul: New leader elected: Node 15004
2016/03/28 05:53:17 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:53:17 [DEBUG] raft: Node 127.0.0.1:15005 updated peer set (2): [127.0.0.1:15005]
2016/03/28 05:53:17 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:53:18 [INFO] consul: member 'Node 15004' joined, marking health alive
2016/03/28 05:53:20 [INFO] consul: shutting down server
2016/03/28 05:53:20 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:20 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:20 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACLEndpoint_Update_PurgeCache (4.70s)
=== RUN   TestACLEndpoint_Apply_CustomID
2016/03/28 05:53:21 [INFO] raft: Node at 127.0.0.1:15009 [Follower] entering Follower state
2016/03/28 05:53:21 [INFO] serf: EventMemberJoin: Node 15008 127.0.0.1
2016/03/28 05:53:21 [INFO] consul: adding LAN server Node 15008 (Addr: 127.0.0.1:15009) (DC: dc1)
2016/03/28 05:53:21 [INFO] serf: EventMemberJoin: Node 15008.dc1 127.0.0.1
2016/03/28 05:53:21 [INFO] consul: adding WAN server Node 15008.dc1 (Addr: 127.0.0.1:15009) (DC: dc1)
2016/03/28 05:53:21 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:21 [INFO] raft: Node at 127.0.0.1:15009 [Candidate] entering Candidate state
2016/03/28 05:53:22 [DEBUG] raft: Votes needed: 1
2016/03/28 05:53:22 [DEBUG] raft: Vote granted from 127.0.0.1:15009. Tally: 1
2016/03/28 05:53:22 [INFO] raft: Election won. Tally: 1
2016/03/28 05:53:22 [INFO] raft: Node at 127.0.0.1:15009 [Leader] entering Leader state
2016/03/28 05:53:22 [INFO] consul: cluster leadership acquired
2016/03/28 05:53:22 [INFO] consul: New leader elected: Node 15008
2016/03/28 05:53:22 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:53:22 [DEBUG] raft: Node 127.0.0.1:15009 updated peer set (2): [127.0.0.1:15009]
2016/03/28 05:53:22 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:53:23 [INFO] consul: member 'Node 15008' joined, marking health alive
2016/03/28 05:53:24 [INFO] consul: shutting down server
2016/03/28 05:53:24 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:25 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:25 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:53:25 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_Apply_CustomID (4.48s)
=== RUN   TestACLEndpoint_Apply_Denied
2016/03/28 05:53:26 [INFO] raft: Node at 127.0.0.1:15013 [Follower] entering Follower state
2016/03/28 05:53:26 [INFO] serf: EventMemberJoin: Node 15012 127.0.0.1
2016/03/28 05:53:26 [INFO] consul: adding LAN server Node 15012 (Addr: 127.0.0.1:15013) (DC: dc1)
2016/03/28 05:53:26 [INFO] serf: EventMemberJoin: Node 15012.dc1 127.0.0.1
2016/03/28 05:53:26 [INFO] consul: adding WAN server Node 15012.dc1 (Addr: 127.0.0.1:15013) (DC: dc1)
2016/03/28 05:53:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:26 [INFO] raft: Node at 127.0.0.1:15013 [Candidate] entering Candidate state
2016/03/28 05:53:27 [DEBUG] raft: Votes needed: 1
2016/03/28 05:53:27 [DEBUG] raft: Vote granted from 127.0.0.1:15013. Tally: 1
2016/03/28 05:53:27 [INFO] raft: Election won. Tally: 1
2016/03/28 05:53:27 [INFO] raft: Node at 127.0.0.1:15013 [Leader] entering Leader state
2016/03/28 05:53:27 [INFO] consul: cluster leadership acquired
2016/03/28 05:53:27 [INFO] consul: New leader elected: Node 15012
2016/03/28 05:53:27 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:53:27 [DEBUG] raft: Node 127.0.0.1:15013 updated peer set (2): [127.0.0.1:15013]
2016/03/28 05:53:27 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:53:28 [INFO] consul: member 'Node 15012' joined, marking health alive
2016/03/28 05:53:28 [INFO] consul: shutting down server
2016/03/28 05:53:28 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:28 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:28 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:53:28 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_Apply_Denied (3.46s)
=== RUN   TestACLEndpoint_Apply_DeleteAnon
2016/03/28 05:53:29 [INFO] raft: Node at 127.0.0.1:15017 [Follower] entering Follower state
2016/03/28 05:53:29 [INFO] serf: EventMemberJoin: Node 15016 127.0.0.1
2016/03/28 05:53:29 [INFO] consul: adding LAN server Node 15016 (Addr: 127.0.0.1:15017) (DC: dc1)
2016/03/28 05:53:29 [INFO] serf: EventMemberJoin: Node 15016.dc1 127.0.0.1
2016/03/28 05:53:29 [INFO] consul: adding WAN server Node 15016.dc1 (Addr: 127.0.0.1:15017) (DC: dc1)
2016/03/28 05:53:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:29 [INFO] raft: Node at 127.0.0.1:15017 [Candidate] entering Candidate state
2016/03/28 05:53:30 [DEBUG] raft: Votes needed: 1
2016/03/28 05:53:30 [DEBUG] raft: Vote granted from 127.0.0.1:15017. Tally: 1
2016/03/28 05:53:30 [INFO] raft: Election won. Tally: 1
2016/03/28 05:53:30 [INFO] raft: Node at 127.0.0.1:15017 [Leader] entering Leader state
2016/03/28 05:53:30 [INFO] consul: cluster leadership acquired
2016/03/28 05:53:30 [INFO] consul: New leader elected: Node 15016
2016/03/28 05:53:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:53:30 [DEBUG] raft: Node 127.0.0.1:15017 updated peer set (2): [127.0.0.1:15017]
2016/03/28 05:53:30 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:53:30 [INFO] consul: member 'Node 15016' joined, marking health alive
2016/03/28 05:53:31 [INFO] consul: shutting down server
2016/03/28 05:53:31 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:31 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:31 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:53:31 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_Apply_DeleteAnon (2.90s)
=== RUN   TestACLEndpoint_Apply_RootChange
2016/03/28 05:53:32 [INFO] raft: Node at 127.0.0.1:15021 [Follower] entering Follower state
2016/03/28 05:53:32 [INFO] serf: EventMemberJoin: Node 15020 127.0.0.1
2016/03/28 05:53:32 [INFO] consul: adding LAN server Node 15020 (Addr: 127.0.0.1:15021) (DC: dc1)
2016/03/28 05:53:32 [INFO] serf: EventMemberJoin: Node 15020.dc1 127.0.0.1
2016/03/28 05:53:32 [INFO] consul: adding WAN server Node 15020.dc1 (Addr: 127.0.0.1:15021) (DC: dc1)
2016/03/28 05:53:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:32 [INFO] raft: Node at 127.0.0.1:15021 [Candidate] entering Candidate state
2016/03/28 05:53:33 [DEBUG] raft: Votes needed: 1
2016/03/28 05:53:33 [DEBUG] raft: Vote granted from 127.0.0.1:15021. Tally: 1
2016/03/28 05:53:33 [INFO] raft: Election won. Tally: 1
2016/03/28 05:53:33 [INFO] raft: Node at 127.0.0.1:15021 [Leader] entering Leader state
2016/03/28 05:53:33 [INFO] consul: cluster leadership acquired
2016/03/28 05:53:33 [INFO] consul: New leader elected: Node 15020
2016/03/28 05:53:34 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:53:34 [DEBUG] raft: Node 127.0.0.1:15021 updated peer set (2): [127.0.0.1:15021]
2016/03/28 05:53:34 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:53:34 [INFO] consul: member 'Node 15020' joined, marking health alive
2016/03/28 05:53:34 [INFO] consul: shutting down server
2016/03/28 05:53:34 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:35 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:35 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:53:35 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_Apply_RootChange (3.55s)
=== RUN   TestACLEndpoint_Get
2016/03/28 05:53:36 [INFO] raft: Node at 127.0.0.1:15025 [Follower] entering Follower state
2016/03/28 05:53:36 [INFO] serf: EventMemberJoin: Node 15024 127.0.0.1
2016/03/28 05:53:36 [INFO] consul: adding LAN server Node 15024 (Addr: 127.0.0.1:15025) (DC: dc1)
2016/03/28 05:53:36 [INFO] serf: EventMemberJoin: Node 15024.dc1 127.0.0.1
2016/03/28 05:53:36 [INFO] consul: adding WAN server Node 15024.dc1 (Addr: 127.0.0.1:15025) (DC: dc1)
2016/03/28 05:53:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:36 [INFO] raft: Node at 127.0.0.1:15025 [Candidate] entering Candidate state
2016/03/28 05:53:37 [DEBUG] raft: Votes needed: 1
2016/03/28 05:53:37 [DEBUG] raft: Vote granted from 127.0.0.1:15025. Tally: 1
2016/03/28 05:53:37 [INFO] raft: Election won. Tally: 1
2016/03/28 05:53:37 [INFO] raft: Node at 127.0.0.1:15025 [Leader] entering Leader state
2016/03/28 05:53:37 [INFO] consul: cluster leadership acquired
2016/03/28 05:53:37 [INFO] consul: New leader elected: Node 15024
2016/03/28 05:53:37 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:53:37 [DEBUG] raft: Node 127.0.0.1:15025 updated peer set (2): [127.0.0.1:15025]
2016/03/28 05:53:37 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:53:37 [INFO] consul: member 'Node 15024' joined, marking health alive
2016/03/28 05:53:39 [INFO] consul: shutting down server
2016/03/28 05:53:39 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:39 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:39 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:53:39 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_Get (4.41s)
=== RUN   TestACLEndpoint_GetPolicy
2016/03/28 05:53:40 [INFO] raft: Node at 127.0.0.1:15029 [Follower] entering Follower state
2016/03/28 05:53:40 [INFO] serf: EventMemberJoin: Node 15028 127.0.0.1
2016/03/28 05:53:40 [INFO] consul: adding LAN server Node 15028 (Addr: 127.0.0.1:15029) (DC: dc1)
2016/03/28 05:53:40 [INFO] serf: EventMemberJoin: Node 15028.dc1 127.0.0.1
2016/03/28 05:53:40 [INFO] consul: adding WAN server Node 15028.dc1 (Addr: 127.0.0.1:15029) (DC: dc1)
2016/03/28 05:53:40 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:40 [INFO] raft: Node at 127.0.0.1:15029 [Candidate] entering Candidate state
2016/03/28 05:53:40 [DEBUG] raft: Votes needed: 1
2016/03/28 05:53:40 [DEBUG] raft: Vote granted from 127.0.0.1:15029. Tally: 1
2016/03/28 05:53:40 [INFO] raft: Election won. Tally: 1
2016/03/28 05:53:40 [INFO] raft: Node at 127.0.0.1:15029 [Leader] entering Leader state
2016/03/28 05:53:40 [INFO] consul: cluster leadership acquired
2016/03/28 05:53:40 [INFO] consul: New leader elected: Node 15028
2016/03/28 05:53:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:53:41 [DEBUG] raft: Node 127.0.0.1:15029 updated peer set (2): [127.0.0.1:15029]
2016/03/28 05:53:41 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:53:41 [INFO] consul: member 'Node 15028' joined, marking health alive
2016/03/28 05:53:42 [INFO] consul: shutting down server
2016/03/28 05:53:42 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:43 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:43 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACLEndpoint_GetPolicy (3.65s)
=== RUN   TestACLEndpoint_List
2016/03/28 05:53:44 [INFO] raft: Node at 127.0.0.1:15033 [Follower] entering Follower state
2016/03/28 05:53:44 [INFO] serf: EventMemberJoin: Node 15032 127.0.0.1
2016/03/28 05:53:44 [INFO] consul: adding LAN server Node 15032 (Addr: 127.0.0.1:15033) (DC: dc1)
2016/03/28 05:53:44 [INFO] serf: EventMemberJoin: Node 15032.dc1 127.0.0.1
2016/03/28 05:53:44 [INFO] consul: adding WAN server Node 15032.dc1 (Addr: 127.0.0.1:15033) (DC: dc1)
2016/03/28 05:53:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:44 [INFO] raft: Node at 127.0.0.1:15033 [Candidate] entering Candidate state
2016/03/28 05:53:45 [DEBUG] raft: Votes needed: 1
2016/03/28 05:53:45 [DEBUG] raft: Vote granted from 127.0.0.1:15033. Tally: 1
2016/03/28 05:53:45 [INFO] raft: Election won. Tally: 1
2016/03/28 05:53:45 [INFO] raft: Node at 127.0.0.1:15033 [Leader] entering Leader state
2016/03/28 05:53:45 [INFO] consul: cluster leadership acquired
2016/03/28 05:53:45 [INFO] consul: New leader elected: Node 15032
2016/03/28 05:53:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:53:46 [DEBUG] raft: Node 127.0.0.1:15033 updated peer set (2): [127.0.0.1:15033]
2016/03/28 05:53:46 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:53:46 [INFO] consul: member 'Node 15032' joined, marking health alive
2016/03/28 05:53:50 [INFO] consul: shutting down server
2016/03/28 05:53:50 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:50 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:50 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:53:50 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_List (7.29s)
=== RUN   TestACLEndpoint_List_Denied
2016/03/28 05:53:51 [INFO] raft: Node at 127.0.0.1:15037 [Follower] entering Follower state
2016/03/28 05:53:51 [INFO] serf: EventMemberJoin: Node 15036 127.0.0.1
2016/03/28 05:53:51 [INFO] consul: adding LAN server Node 15036 (Addr: 127.0.0.1:15037) (DC: dc1)
2016/03/28 05:53:51 [INFO] serf: EventMemberJoin: Node 15036.dc1 127.0.0.1
2016/03/28 05:53:51 [INFO] consul: adding WAN server Node 15036.dc1 (Addr: 127.0.0.1:15037) (DC: dc1)
2016/03/28 05:53:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:51 [INFO] raft: Node at 127.0.0.1:15037 [Candidate] entering Candidate state
2016/03/28 05:53:52 [DEBUG] raft: Votes needed: 1
2016/03/28 05:53:52 [DEBUG] raft: Vote granted from 127.0.0.1:15037. Tally: 1
2016/03/28 05:53:52 [INFO] raft: Election won. Tally: 1
2016/03/28 05:53:52 [INFO] raft: Node at 127.0.0.1:15037 [Leader] entering Leader state
2016/03/28 05:53:52 [INFO] consul: cluster leadership acquired
2016/03/28 05:53:52 [INFO] consul: New leader elected: Node 15036
2016/03/28 05:53:52 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:53:52 [DEBUG] raft: Node 127.0.0.1:15037 updated peer set (2): [127.0.0.1:15037]
2016/03/28 05:53:52 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:53:53 [INFO] consul: member 'Node 15036' joined, marking health alive
2016/03/28 05:53:53 [INFO] consul: shutting down server
2016/03/28 05:53:53 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:53 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:53 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_List_Denied (3.39s)
=== RUN   TestACL_Disabled
2016/03/28 05:53:55 [INFO] raft: Node at 127.0.0.1:15041 [Follower] entering Follower state
2016/03/28 05:53:55 [INFO] serf: EventMemberJoin: Node 15040 127.0.0.1
2016/03/28 05:53:55 [INFO] consul: adding LAN server Node 15040 (Addr: 127.0.0.1:15041) (DC: dc1)
2016/03/28 05:53:55 [INFO] serf: EventMemberJoin: Node 15040.dc1 127.0.0.1
2016/03/28 05:53:55 [INFO] consul: adding WAN server Node 15040.dc1 (Addr: 127.0.0.1:15041) (DC: dc1)
2016/03/28 05:53:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:55 [INFO] raft: Node at 127.0.0.1:15041 [Candidate] entering Candidate state
2016/03/28 05:53:55 [DEBUG] raft: Votes needed: 1
2016/03/28 05:53:55 [DEBUG] raft: Vote granted from 127.0.0.1:15041. Tally: 1
2016/03/28 05:53:55 [INFO] raft: Election won. Tally: 1
2016/03/28 05:53:55 [INFO] raft: Node at 127.0.0.1:15041 [Leader] entering Leader state
2016/03/28 05:53:55 [INFO] consul: cluster leadership acquired
2016/03/28 05:53:55 [INFO] consul: New leader elected: Node 15040
2016/03/28 05:53:55 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:53:55 [DEBUG] raft: Node 127.0.0.1:15041 updated peer set (2): [127.0.0.1:15041]
2016/03/28 05:53:55 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:53:55 [INFO] consul: member 'Node 15040' joined, marking health alive
2016/03/28 05:53:56 [INFO] consul: shutting down server
2016/03/28 05:53:56 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:56 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:56 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACL_Disabled (2.49s)
=== RUN   TestACL_ResolveRootACL
2016/03/28 05:53:57 [INFO] raft: Node at 127.0.0.1:15045 [Follower] entering Follower state
2016/03/28 05:53:57 [INFO] serf: EventMemberJoin: Node 15044 127.0.0.1
2016/03/28 05:53:57 [INFO] consul: adding LAN server Node 15044 (Addr: 127.0.0.1:15045) (DC: dc1)
2016/03/28 05:53:57 [INFO] serf: EventMemberJoin: Node 15044.dc1 127.0.0.1
2016/03/28 05:53:57 [INFO] consul: shutting down server
2016/03/28 05:53:57 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:57 [INFO] raft: Node at 127.0.0.1:15045 [Candidate] entering Candidate state
2016/03/28 05:53:57 [WARN] serf: Shutdown without a Leave
2016/03/28 05:53:57 [DEBUG] raft: Votes needed: 1
--- PASS: TestACL_ResolveRootACL (1.46s)
=== RUN   TestACL_Authority_NotFound
2016/03/28 05:53:58 [INFO] raft: Node at 127.0.0.1:15049 [Follower] entering Follower state
2016/03/28 05:53:58 [INFO] serf: EventMemberJoin: Node 15048 127.0.0.1
2016/03/28 05:53:58 [INFO] consul: adding LAN server Node 15048 (Addr: 127.0.0.1:15049) (DC: dc1)
2016/03/28 05:53:58 [INFO] serf: EventMemberJoin: Node 15048.dc1 127.0.0.1
2016/03/28 05:53:58 [INFO] consul: adding WAN server Node 15048.dc1 (Addr: 127.0.0.1:15049) (DC: dc1)
2016/03/28 05:53:58 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:53:58 [INFO] raft: Node at 127.0.0.1:15049 [Candidate] entering Candidate state
2016/03/28 05:53:59 [DEBUG] raft: Votes needed: 1
2016/03/28 05:53:59 [DEBUG] raft: Vote granted from 127.0.0.1:15049. Tally: 1
2016/03/28 05:53:59 [INFO] raft: Election won. Tally: 1
2016/03/28 05:53:59 [INFO] raft: Node at 127.0.0.1:15049 [Leader] entering Leader state
2016/03/28 05:53:59 [INFO] consul: cluster leadership acquired
2016/03/28 05:53:59 [INFO] consul: New leader elected: Node 15048
2016/03/28 05:54:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:54:00 [DEBUG] raft: Node 127.0.0.1:15049 updated peer set (2): [127.0.0.1:15049]
2016/03/28 05:54:00 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:54:00 [INFO] consul: member 'Node 15048' joined, marking health alive
2016/03/28 05:54:01 [INFO] consul: shutting down server
2016/03/28 05:54:01 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:01 [WARN] serf: Shutdown without a Leave
--- PASS: TestACL_Authority_NotFound (3.38s)
=== RUN   TestACL_Authority_Found
2016/03/28 05:54:02 [INFO] raft: Node at 127.0.0.1:15053 [Follower] entering Follower state
2016/03/28 05:54:02 [INFO] serf: EventMemberJoin: Node 15052 127.0.0.1
2016/03/28 05:54:02 [INFO] consul: adding LAN server Node 15052 (Addr: 127.0.0.1:15053) (DC: dc1)
2016/03/28 05:54:02 [INFO] serf: EventMemberJoin: Node 15052.dc1 127.0.0.1
2016/03/28 05:54:02 [INFO] consul: adding WAN server Node 15052.dc1 (Addr: 127.0.0.1:15053) (DC: dc1)
2016/03/28 05:54:02 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:54:02 [INFO] raft: Node at 127.0.0.1:15053 [Candidate] entering Candidate state
2016/03/28 05:54:04 [DEBUG] raft: Votes needed: 1
2016/03/28 05:54:04 [DEBUG] raft: Vote granted from 127.0.0.1:15053. Tally: 1
2016/03/28 05:54:04 [INFO] raft: Election won. Tally: 1
2016/03/28 05:54:04 [INFO] raft: Node at 127.0.0.1:15053 [Leader] entering Leader state
2016/03/28 05:54:04 [INFO] consul: cluster leadership acquired
2016/03/28 05:54:04 [INFO] consul: New leader elected: Node 15052
2016/03/28 05:54:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:54:04 [DEBUG] raft: Node 127.0.0.1:15053 updated peer set (2): [127.0.0.1:15053]
2016/03/28 05:54:04 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:54:05 [INFO] consul: member 'Node 15052' joined, marking health alive
2016/03/28 05:54:06 [INFO] consul: shutting down server
2016/03/28 05:54:06 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:06 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:06 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACL_Authority_Found (5.71s)
=== RUN   TestACL_Authority_Anonymous_Found
2016/03/28 05:54:07 [INFO] raft: Node at 127.0.0.1:15057 [Follower] entering Follower state
2016/03/28 05:54:07 [INFO] serf: EventMemberJoin: Node 15056 127.0.0.1
2016/03/28 05:54:07 [INFO] consul: adding LAN server Node 15056 (Addr: 127.0.0.1:15057) (DC: dc1)
2016/03/28 05:54:07 [INFO] serf: EventMemberJoin: Node 15056.dc1 127.0.0.1
2016/03/28 05:54:07 [INFO] consul: adding WAN server Node 15056.dc1 (Addr: 127.0.0.1:15057) (DC: dc1)
2016/03/28 05:54:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:54:07 [INFO] raft: Node at 127.0.0.1:15057 [Candidate] entering Candidate state
2016/03/28 05:54:08 [DEBUG] raft: Votes needed: 1
2016/03/28 05:54:08 [DEBUG] raft: Vote granted from 127.0.0.1:15057. Tally: 1
2016/03/28 05:54:08 [INFO] raft: Election won. Tally: 1
2016/03/28 05:54:08 [INFO] raft: Node at 127.0.0.1:15057 [Leader] entering Leader state
2016/03/28 05:54:08 [INFO] consul: cluster leadership acquired
2016/03/28 05:54:08 [INFO] consul: New leader elected: Node 15056
2016/03/28 05:54:08 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:54:08 [DEBUG] raft: Node 127.0.0.1:15057 updated peer set (2): [127.0.0.1:15057]
2016/03/28 05:54:08 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:54:09 [INFO] consul: member 'Node 15056' joined, marking health alive
2016/03/28 05:54:09 [INFO] consul: shutting down server
2016/03/28 05:54:09 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:10 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:10 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:54:10 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACL_Authority_Anonymous_Found (3.15s)
=== RUN   TestACL_Authority_Master_Found
2016/03/28 05:54:10 [INFO] raft: Node at 127.0.0.1:15061 [Follower] entering Follower state
2016/03/28 05:54:10 [INFO] serf: EventMemberJoin: Node 15060 127.0.0.1
2016/03/28 05:54:10 [INFO] consul: adding LAN server Node 15060 (Addr: 127.0.0.1:15061) (DC: dc1)
2016/03/28 05:54:10 [INFO] serf: EventMemberJoin: Node 15060.dc1 127.0.0.1
2016/03/28 05:54:10 [INFO] consul: adding WAN server Node 15060.dc1 (Addr: 127.0.0.1:15061) (DC: dc1)
2016/03/28 05:54:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:54:11 [INFO] raft: Node at 127.0.0.1:15061 [Candidate] entering Candidate state
2016/03/28 05:54:12 [DEBUG] raft: Votes needed: 1
2016/03/28 05:54:12 [DEBUG] raft: Vote granted from 127.0.0.1:15061. Tally: 1
2016/03/28 05:54:12 [INFO] raft: Election won. Tally: 1
2016/03/28 05:54:12 [INFO] raft: Node at 127.0.0.1:15061 [Leader] entering Leader state
2016/03/28 05:54:12 [INFO] consul: cluster leadership acquired
2016/03/28 05:54:12 [INFO] consul: New leader elected: Node 15060
2016/03/28 05:54:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:54:12 [DEBUG] raft: Node 127.0.0.1:15061 updated peer set (2): [127.0.0.1:15061]
2016/03/28 05:54:12 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:54:13 [INFO] consul: member 'Node 15060' joined, marking health alive
2016/03/28 05:54:13 [INFO] consul: shutting down server
2016/03/28 05:54:13 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:13 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:13 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACL_Authority_Master_Found (3.69s)
=== RUN   TestACL_Authority_Management
2016/03/28 05:54:14 [INFO] raft: Node at 127.0.0.1:15065 [Follower] entering Follower state
2016/03/28 05:54:14 [INFO] serf: EventMemberJoin: Node 15064 127.0.0.1
2016/03/28 05:54:14 [INFO] consul: adding LAN server Node 15064 (Addr: 127.0.0.1:15065) (DC: dc1)
2016/03/28 05:54:14 [INFO] serf: EventMemberJoin: Node 15064.dc1 127.0.0.1
2016/03/28 05:54:14 [INFO] consul: adding WAN server Node 15064.dc1 (Addr: 127.0.0.1:15065) (DC: dc1)
2016/03/28 05:54:14 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:54:14 [INFO] raft: Node at 127.0.0.1:15065 [Candidate] entering Candidate state
2016/03/28 05:54:15 [DEBUG] raft: Votes needed: 1
2016/03/28 05:54:15 [DEBUG] raft: Vote granted from 127.0.0.1:15065. Tally: 1
2016/03/28 05:54:15 [INFO] raft: Election won. Tally: 1
2016/03/28 05:54:15 [INFO] raft: Node at 127.0.0.1:15065 [Leader] entering Leader state
2016/03/28 05:54:15 [INFO] consul: cluster leadership acquired
2016/03/28 05:54:15 [INFO] consul: New leader elected: Node 15064
2016/03/28 05:54:15 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:54:15 [DEBUG] raft: Node 127.0.0.1:15065 updated peer set (2): [127.0.0.1:15065]
2016/03/28 05:54:15 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:54:16 [INFO] consul: member 'Node 15064' joined, marking health alive
2016/03/28 05:54:16 [INFO] consul: shutting down server
2016/03/28 05:54:16 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:16 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:16 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:54:16 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACL_Authority_Management (2.96s)
=== RUN   TestACL_NonAuthority_NotFound
2016/03/28 05:54:17 [INFO] raft: Node at 127.0.0.1:15069 [Follower] entering Follower state
2016/03/28 05:54:17 [INFO] serf: EventMemberJoin: Node 15068 127.0.0.1
2016/03/28 05:54:17 [INFO] consul: adding LAN server Node 15068 (Addr: 127.0.0.1:15069) (DC: dc1)
2016/03/28 05:54:17 [INFO] serf: EventMemberJoin: Node 15068.dc1 127.0.0.1
2016/03/28 05:54:17 [INFO] consul: adding WAN server Node 15068.dc1 (Addr: 127.0.0.1:15069) (DC: dc1)
2016/03/28 05:54:17 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:54:17 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:18 [DEBUG] raft: Votes needed: 1
2016/03/28 05:54:18 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:18 [INFO] raft: Election won. Tally: 1
2016/03/28 05:54:18 [INFO] raft: Node at 127.0.0.1:15069 [Leader] entering Leader state
2016/03/28 05:54:18 [INFO] consul: cluster leadership acquired
2016/03/28 05:54:18 [INFO] consul: New leader elected: Node 15068
2016/03/28 05:54:18 [INFO] raft: Node at 127.0.0.1:15073 [Follower] entering Follower state
2016/03/28 05:54:18 [INFO] serf: EventMemberJoin: Node 15072 127.0.0.1
2016/03/28 05:54:18 [INFO] consul: adding LAN server Node 15072 (Addr: 127.0.0.1:15073) (DC: dc1)
2016/03/28 05:54:18 [INFO] serf: EventMemberJoin: Node 15072.dc1 127.0.0.1
2016/03/28 05:54:18 [INFO] consul: adding WAN server Node 15072.dc1 (Addr: 127.0.0.1:15073) (DC: dc1)
2016/03/28 05:54:18 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15070
2016/03/28 05:54:18 [DEBUG] memberlist: TCP connection from=127.0.0.1:42710
2016/03/28 05:54:18 [INFO] serf: EventMemberJoin: Node 15072 127.0.0.1
2016/03/28 05:54:18 [INFO] consul: adding LAN server Node 15072 (Addr: 127.0.0.1:15073) (DC: dc1)
2016/03/28 05:54:18 [INFO] serf: EventMemberJoin: Node 15068 127.0.0.1
2016/03/28 05:54:18 [INFO] consul: adding LAN server Node 15068 (Addr: 127.0.0.1:15069) (DC: dc1)
2016/03/28 05:54:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:18 [DEBUG] serf: messageJoinType: Node 15072
2016/03/28 05:54:18 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:54:18 [DEBUG] serf: messageJoinType: Node 15072
2016/03/28 05:54:18 [DEBUG] serf: messageJoinType: Node 15072
2016/03/28 05:54:18 [DEBUG] serf: messageJoinType: Node 15072
2016/03/28 05:54:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:18 [DEBUG] serf: messageJoinType: Node 15072
2016/03/28 05:54:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:18 [DEBUG] serf: messageJoinType: Node 15072
2016/03/28 05:54:18 [DEBUG] serf: messageJoinType: Node 15072
2016/03/28 05:54:18 [DEBUG] serf: messageJoinType: Node 15072
2016/03/28 05:54:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:54:18 [DEBUG] raft: Node 127.0.0.1:15069 updated peer set (2): [127.0.0.1:15069]
2016/03/28 05:54:19 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:54:19 [DEBUG] memberlist: Potential blocking operation. Last command took 13.542ms
2016/03/28 05:54:20 [DEBUG] raft: Node 127.0.0.1:15069 updated peer set (2): [127.0.0.1:15073 127.0.0.1:15069]
2016/03/28 05:54:20 [INFO] raft: Added peer 127.0.0.1:15073, starting replication
2016/03/28 05:54:20 [DEBUG] raft-net: 127.0.0.1:15073 accepted connection from: 127.0.0.1:60371
2016/03/28 05:54:20 [DEBUG] raft-net: 127.0.0.1:15073 accepted connection from: 127.0.0.1:60373
2016/03/28 05:54:20 [DEBUG] memberlist: Potential blocking operation. Last command took 11.646666ms
2016/03/28 05:54:20 [DEBUG] raft: Failed to contact 127.0.0.1:15073 in 353.747666ms
2016/03/28 05:54:20 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/28 05:54:20 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 05:54:20 [INFO] raft: Node at 127.0.0.1:15069 [Follower] entering Follower state
2016/03/28 05:54:20 [INFO] consul: cluster leadership lost
2016/03/28 05:54:20 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 05:54:20 [ERR] consul: failed to reconcile member: {Node 15072 127.0.0.1 15074 map[vsn_max:3 build: port:15073 role:consul dc:dc1 vsn:2 vsn_min:1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:54:20 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 05:54:20 [WARN] raft: AppendEntries to 127.0.0.1:15073 rejected, sending older logs (next: 1)
2016/03/28 05:54:20 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:54:20 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:21 [DEBUG] raft-net: 127.0.0.1:15073 accepted connection from: 127.0.0.1:60376
2016/03/28 05:54:21 [DEBUG] raft: Node 127.0.0.1:15073 updated peer set (2): [127.0.0.1:15069]
2016/03/28 05:54:21 [INFO] raft: pipelining replication to peer 127.0.0.1:15073
2016/03/28 05:54:21 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15073
2016/03/28 05:54:21 [DEBUG] memberlist: Potential blocking operation. Last command took 11.24ms
2016/03/28 05:54:21 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:21 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:21 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:21 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:22 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:22 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:22 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:22 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:23 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:23 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:23 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:23 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:24 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:24 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:24 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:24 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:24 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:24 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:25 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:25 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:25 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:25 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:25 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:25 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:26 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:26 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:26 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:26 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:26 [DEBUG] memberlist: Potential blocking operation. Last command took 10.221666ms
2016/03/28 05:54:27 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:27 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:27 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:27 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:28 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:28 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:28 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:28 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:29 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:29 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:29 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:29 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:29 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:29 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:29 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:29 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:30 [INFO] consul: shutting down server
2016/03/28 05:54:30 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:30 [DEBUG] memberlist: Failed UDP ping: Node 15072 (timeout reached)
2016/03/28 05:54:30 [INFO] memberlist: Suspect Node 15072 has failed, no acks received
2016/03/28 05:54:30 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:30 [DEBUG] memberlist: Failed UDP ping: Node 15072 (timeout reached)
2016/03/28 05:54:30 [INFO] memberlist: Suspect Node 15072 has failed, no acks received
2016/03/28 05:54:30 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:30 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/03/28 05:54:30 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:54:30 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15073: EOF
2016/03/28 05:54:30 [INFO] memberlist: Marking Node 15072 as failed, suspect timeout reached
2016/03/28 05:54:30 [INFO] serf: EventMemberFailed: Node 15072 127.0.0.1
2016/03/28 05:54:30 [INFO] consul: removing LAN server Node 15072 (Addr: 127.0.0.1:15073) (DC: dc1)
2016/03/28 05:54:30 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:30 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/28 05:54:31 [DEBUG] memberlist: Failed UDP ping: Node 15072 (timeout reached)
2016/03/28 05:54:31 [INFO] memberlist: Suspect Node 15072 has failed, no acks received
2016/03/28 05:54:31 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:54:31 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15073: EOF
2016/03/28 05:54:31 [INFO] consul: shutting down server
2016/03/28 05:54:31 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:31 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:31 [DEBUG] raft: Votes needed: 2
--- FAIL: TestACL_NonAuthority_NotFound (14.85s)
	wait.go:41: failed to find leader: No cluster leader
=== RUN   TestACL_NonAuthority_Found
2016/03/28 05:54:32 [INFO] raft: Node at 127.0.0.1:15077 [Follower] entering Follower state
2016/03/28 05:54:32 [INFO] serf: EventMemberJoin: Node 15076 127.0.0.1
2016/03/28 05:54:32 [INFO] consul: adding LAN server Node 15076 (Addr: 127.0.0.1:15077) (DC: dc1)
2016/03/28 05:54:32 [INFO] serf: EventMemberJoin: Node 15076.dc1 127.0.0.1
2016/03/28 05:54:32 [INFO] consul: adding WAN server Node 15076.dc1 (Addr: 127.0.0.1:15077) (DC: dc1)
2016/03/28 05:54:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:54:32 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/03/28 05:54:33 [DEBUG] raft: Votes needed: 1
2016/03/28 05:54:33 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/03/28 05:54:33 [INFO] raft: Election won. Tally: 1
2016/03/28 05:54:33 [INFO] raft: Node at 127.0.0.1:15077 [Leader] entering Leader state
2016/03/28 05:54:33 [INFO] raft: Node at 127.0.0.1:15081 [Follower] entering Follower state
2016/03/28 05:54:33 [INFO] consul: cluster leadership acquired
2016/03/28 05:54:33 [INFO] consul: New leader elected: Node 15076
2016/03/28 05:54:33 [INFO] serf: EventMemberJoin: Node 15080 127.0.0.1
2016/03/28 05:54:33 [INFO] consul: adding LAN server Node 15080 (Addr: 127.0.0.1:15081) (DC: dc1)
2016/03/28 05:54:33 [INFO] serf: EventMemberJoin: Node 15080.dc1 127.0.0.1
2016/03/28 05:54:33 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15078
2016/03/28 05:54:33 [INFO] consul: adding WAN server Node 15080.dc1 (Addr: 127.0.0.1:15081) (DC: dc1)
2016/03/28 05:54:33 [DEBUG] memberlist: TCP connection from=127.0.0.1:57312
2016/03/28 05:54:33 [INFO] serf: EventMemberJoin: Node 15080 127.0.0.1
2016/03/28 05:54:33 [INFO] serf: EventMemberJoin: Node 15076 127.0.0.1
2016/03/28 05:54:33 [INFO] consul: adding LAN server Node 15080 (Addr: 127.0.0.1:15081) (DC: dc1)
2016/03/28 05:54:33 [INFO] consul: adding LAN server Node 15076 (Addr: 127.0.0.1:15077) (DC: dc1)
2016/03/28 05:54:33 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:54:33 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:33 [DEBUG] serf: messageJoinType: Node 15080
2016/03/28 05:54:33 [DEBUG] serf: messageJoinType: Node 15080
2016/03/28 05:54:33 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:33 [DEBUG] serf: messageJoinType: Node 15080
2016/03/28 05:54:33 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:33 [DEBUG] serf: messageJoinType: Node 15080
2016/03/28 05:54:33 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:33 [DEBUG] serf: messageJoinType: Node 15080
2016/03/28 05:54:33 [DEBUG] serf: messageJoinType: Node 15080
2016/03/28 05:54:33 [DEBUG] serf: messageJoinType: Node 15080
2016/03/28 05:54:33 [DEBUG] serf: messageJoinType: Node 15080
2016/03/28 05:54:34 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:54:34 [DEBUG] raft: Node 127.0.0.1:15077 updated peer set (2): [127.0.0.1:15077]
2016/03/28 05:54:34 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:54:35 [INFO] consul: member 'Node 15076' joined, marking health alive
2016/03/28 05:54:35 [DEBUG] raft: Node 127.0.0.1:15077 updated peer set (2): [127.0.0.1:15081 127.0.0.1:15077]
2016/03/28 05:54:35 [INFO] raft: Added peer 127.0.0.1:15081, starting replication
2016/03/28 05:54:35 [DEBUG] raft-net: 127.0.0.1:15081 accepted connection from: 127.0.0.1:32779
2016/03/28 05:54:35 [DEBUG] raft-net: 127.0.0.1:15081 accepted connection from: 127.0.0.1:32780
2016/03/28 05:54:35 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/03/28 05:54:35 [DEBUG] raft: Failed to contact 127.0.0.1:15081 in 186.176ms
2016/03/28 05:54:35 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 05:54:35 [INFO] raft: Node at 127.0.0.1:15077 [Follower] entering Follower state
2016/03/28 05:54:35 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 05:54:35 [ERR] consul: failed to reconcile member: {Node 15080 127.0.0.1 15082 map[role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15081] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:54:35 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 05:54:35 [INFO] consul: cluster leadership lost
2016/03/28 05:54:35 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/28 05:54:35 [ERR] consul.acl: Apply failed: node is not the leader
2016/03/28 05:54:35 [WARN] raft: AppendEntries to 127.0.0.1:15081 rejected, sending older logs (next: 1)
2016/03/28 05:54:35 [INFO] consul: shutting down server
2016/03/28 05:54:35 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:54:35 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/03/28 05:54:35 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:35 [DEBUG] memberlist: Failed UDP ping: Node 15080 (timeout reached)
2016/03/28 05:54:35 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:54:35 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15081: EOF
2016/03/28 05:54:35 [INFO] memberlist: Suspect Node 15080 has failed, no acks received
2016/03/28 05:54:35 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:54:35 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15081: EOF
2016/03/28 05:54:35 [DEBUG] memberlist: Failed UDP ping: Node 15080 (timeout reached)
2016/03/28 05:54:35 [DEBUG] raft: Node 127.0.0.1:15081 updated peer set (2): [127.0.0.1:15077]
2016/03/28 05:54:36 [INFO] consul: shutting down server
2016/03/28 05:54:36 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:36 [INFO] memberlist: Suspect Node 15080 has failed, no acks received
2016/03/28 05:54:36 [INFO] memberlist: Marking Node 15080 as failed, suspect timeout reached
2016/03/28 05:54:36 [INFO] serf: EventMemberFailed: Node 15080 127.0.0.1
2016/03/28 05:54:36 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:36 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:45 [ERR] raft: Failed to heartbeat to 127.0.0.1:15081: read tcp 127.0.0.1:32782->127.0.0.1:15081: i/o timeout
2016/03/28 05:54:45 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15081: read tcp 127.0.0.1:32783->127.0.0.1:15081: i/o timeout
--- FAIL: TestACL_NonAuthority_Found (14.23s)
	acl_test.go:297: err: node is not the leader
=== RUN   TestACL_NonAuthority_Management
2016/03/28 05:54:46 [INFO] raft: Node at 127.0.0.1:15085 [Follower] entering Follower state
2016/03/28 05:54:46 [INFO] serf: EventMemberJoin: Node 15084 127.0.0.1
2016/03/28 05:54:46 [INFO] consul: adding LAN server Node 15084 (Addr: 127.0.0.1:15085) (DC: dc1)
2016/03/28 05:54:46 [INFO] serf: EventMemberJoin: Node 15084.dc1 127.0.0.1
2016/03/28 05:54:46 [INFO] consul: adding WAN server Node 15084.dc1 (Addr: 127.0.0.1:15085) (DC: dc1)
2016/03/28 05:54:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:54:46 [INFO] raft: Node at 127.0.0.1:15085 [Candidate] entering Candidate state
2016/03/28 05:54:47 [INFO] raft: Node at 127.0.0.1:15089 [Follower] entering Follower state
2016/03/28 05:54:47 [INFO] serf: EventMemberJoin: Node 15088 127.0.0.1
2016/03/28 05:54:47 [INFO] consul: adding LAN server Node 15088 (Addr: 127.0.0.1:15089) (DC: dc1)
2016/03/28 05:54:47 [INFO] serf: EventMemberJoin: Node 15088.dc1 127.0.0.1
2016/03/28 05:54:47 [INFO] consul: adding WAN server Node 15088.dc1 (Addr: 127.0.0.1:15089) (DC: dc1)
2016/03/28 05:54:47 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15086
2016/03/28 05:54:47 [DEBUG] memberlist: TCP connection from=127.0.0.1:60311
2016/03/28 05:54:47 [INFO] serf: EventMemberJoin: Node 15088 127.0.0.1
2016/03/28 05:54:47 [INFO] consul: adding LAN server Node 15088 (Addr: 127.0.0.1:15089) (DC: dc1)
2016/03/28 05:54:47 [INFO] serf: EventMemberJoin: Node 15084 127.0.0.1
2016/03/28 05:54:47 [INFO] consul: adding LAN server Node 15084 (Addr: 127.0.0.1:15085) (DC: dc1)
2016/03/28 05:54:47 [DEBUG] serf: messageJoinType: Node 15088
2016/03/28 05:54:47 [DEBUG] raft: Votes needed: 1
2016/03/28 05:54:47 [DEBUG] raft: Vote granted from 127.0.0.1:15085. Tally: 1
2016/03/28 05:54:47 [INFO] raft: Election won. Tally: 1
2016/03/28 05:54:47 [INFO] raft: Node at 127.0.0.1:15085 [Leader] entering Leader state
2016/03/28 05:54:47 [INFO] consul: cluster leadership acquired
2016/03/28 05:54:47 [INFO] consul: New leader elected: Node 15084
2016/03/28 05:54:47 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:54:47 [DEBUG] serf: messageJoinType: Node 15088
2016/03/28 05:54:47 [DEBUG] serf: messageJoinType: Node 15088
2016/03/28 05:54:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:47 [INFO] consul: New leader elected: Node 15084
2016/03/28 05:54:47 [DEBUG] serf: messageJoinType: Node 15088
2016/03/28 05:54:47 [DEBUG] serf: messageJoinType: Node 15088
2016/03/28 05:54:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:47 [DEBUG] serf: messageJoinType: Node 15088
2016/03/28 05:54:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:47 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:54:47 [DEBUG] serf: messageJoinType: Node 15088
2016/03/28 05:54:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:47 [DEBUG] serf: messageJoinType: Node 15088
2016/03/28 05:54:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:47 [DEBUG] raft: Node 127.0.0.1:15085 updated peer set (2): [127.0.0.1:15085]
2016/03/28 05:54:47 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:54:48 [INFO] consul: member 'Node 15084' joined, marking health alive
2016/03/28 05:54:48 [DEBUG] raft: Node 127.0.0.1:15085 updated peer set (2): [127.0.0.1:15089 127.0.0.1:15085]
2016/03/28 05:54:48 [INFO] raft: Added peer 127.0.0.1:15089, starting replication
2016/03/28 05:54:48 [DEBUG] raft-net: 127.0.0.1:15089 accepted connection from: 127.0.0.1:35570
2016/03/28 05:54:48 [DEBUG] raft-net: 127.0.0.1:15089 accepted connection from: 127.0.0.1:35571
2016/03/28 05:54:48 [ERR] consul.acl: Failed to get policy for 'foobar': No cluster leader
2016/03/28 05:54:48 [INFO] consul: shutting down server
2016/03/28 05:54:48 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:48 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:48 [DEBUG] memberlist: Failed UDP ping: Node 15088 (timeout reached)
2016/03/28 05:54:48 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:54:48 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/03/28 05:54:48 [DEBUG] raft: Failed to contact 127.0.0.1:15089 in 230.670333ms
2016/03/28 05:54:48 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 05:54:48 [INFO] raft: Node at 127.0.0.1:15085 [Follower] entering Follower state
2016/03/28 05:54:48 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 05:54:48 [INFO] consul: cluster leadership lost
2016/03/28 05:54:48 [ERR] consul: failed to reconcile member: {Node 15088 127.0.0.1 15090 map[vsn_min:1 vsn_max:3 build: port:15089 role:consul dc:dc1 vsn:2] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:54:48 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 05:54:48 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15089: EOF
2016/03/28 05:54:48 [INFO] memberlist: Suspect Node 15088 has failed, no acks received
2016/03/28 05:54:48 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:54:48 [INFO] raft: Node at 127.0.0.1:15085 [Candidate] entering Candidate state
2016/03/28 05:54:48 [DEBUG] memberlist: Failed UDP ping: Node 15088 (timeout reached)
2016/03/28 05:54:48 [INFO] memberlist: Suspect Node 15088 has failed, no acks received
2016/03/28 05:54:48 [INFO] memberlist: Marking Node 15088 as failed, suspect timeout reached
2016/03/28 05:54:48 [INFO] serf: EventMemberFailed: Node 15088 127.0.0.1
2016/03/28 05:54:48 [INFO] consul: removing LAN server Node 15088 (Addr: 127.0.0.1:15089) (DC: dc1)
2016/03/28 05:54:48 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:54:48 [ERR] raft: Failed to heartbeat to 127.0.0.1:15089: EOF
2016/03/28 05:54:48 [INFO] consul: shutting down server
2016/03/28 05:54:48 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:48 [DEBUG] memberlist: Failed UDP ping: Node 15088 (timeout reached)
2016/03/28 05:54:48 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15089: dial tcp 127.0.0.1:15089: getsockopt: connection refused
2016/03/28 05:54:48 [INFO] memberlist: Suspect Node 15088 has failed, no acks received
2016/03/28 05:54:49 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:49 [DEBUG] raft: Votes needed: 2
--- FAIL: TestACL_NonAuthority_Management (3.53s)
	acl_test.go:379: unexpected failed read
=== RUN   TestACL_DownPolicy_Deny
2016/03/28 05:54:49 [INFO] raft: Node at 127.0.0.1:15093 [Follower] entering Follower state
2016/03/28 05:54:49 [INFO] serf: EventMemberJoin: Node 15092 127.0.0.1
2016/03/28 05:54:49 [INFO] consul: adding LAN server Node 15092 (Addr: 127.0.0.1:15093) (DC: dc1)
2016/03/28 05:54:49 [INFO] serf: EventMemberJoin: Node 15092.dc1 127.0.0.1
2016/03/28 05:54:49 [INFO] consul: adding WAN server Node 15092.dc1 (Addr: 127.0.0.1:15093) (DC: dc1)
2016/03/28 05:54:49 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:54:49 [INFO] raft: Node at 127.0.0.1:15093 [Candidate] entering Candidate state
2016/03/28 05:54:50 [INFO] raft: Node at 127.0.0.1:15097 [Follower] entering Follower state
2016/03/28 05:54:50 [INFO] serf: EventMemberJoin: Node 15096 127.0.0.1
2016/03/28 05:54:50 [INFO] consul: adding LAN server Node 15096 (Addr: 127.0.0.1:15097) (DC: dc1)
2016/03/28 05:54:50 [INFO] serf: EventMemberJoin: Node 15096.dc1 127.0.0.1
2016/03/28 05:54:50 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15094
2016/03/28 05:54:50 [DEBUG] memberlist: TCP connection from=127.0.0.1:58213
2016/03/28 05:54:50 [INFO] consul: adding WAN server Node 15096.dc1 (Addr: 127.0.0.1:15097) (DC: dc1)
2016/03/28 05:54:50 [INFO] serf: EventMemberJoin: Node 15096 127.0.0.1
2016/03/28 05:54:50 [INFO] serf: EventMemberJoin: Node 15092 127.0.0.1
2016/03/28 05:54:50 [INFO] consul: adding LAN server Node 15096 (Addr: 127.0.0.1:15097) (DC: dc1)
2016/03/28 05:54:50 [INFO] consul: adding LAN server Node 15092 (Addr: 127.0.0.1:15093) (DC: dc1)
2016/03/28 05:54:50 [DEBUG] raft: Votes needed: 1
2016/03/28 05:54:50 [DEBUG] raft: Vote granted from 127.0.0.1:15093. Tally: 1
2016/03/28 05:54:50 [INFO] raft: Election won. Tally: 1
2016/03/28 05:54:50 [INFO] raft: Node at 127.0.0.1:15093 [Leader] entering Leader state
2016/03/28 05:54:50 [INFO] consul: cluster leadership acquired
2016/03/28 05:54:50 [INFO] consul: New leader elected: Node 15092
2016/03/28 05:54:50 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:54:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:50 [INFO] consul: New leader elected: Node 15092
2016/03/28 05:54:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:50 [DEBUG] serf: messageJoinType: Node 15096
2016/03/28 05:54:50 [DEBUG] serf: messageJoinType: Node 15096
2016/03/28 05:54:50 [DEBUG] serf: messageJoinType: Node 15096
2016/03/28 05:54:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:50 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:54:50 [DEBUG] raft: Node 127.0.0.1:15093 updated peer set (2): [127.0.0.1:15093]
2016/03/28 05:54:50 [DEBUG] serf: messageJoinType: Node 15096
2016/03/28 05:54:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:50 [DEBUG] serf: messageJoinType: Node 15096
2016/03/28 05:54:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:50 [DEBUG] serf: messageJoinType: Node 15096
2016/03/28 05:54:50 [DEBUG] serf: messageJoinType: Node 15096
2016/03/28 05:54:50 [DEBUG] serf: messageJoinType: Node 15096
2016/03/28 05:54:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:54:51 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:54:51 [INFO] consul: member 'Node 15092' joined, marking health alive
2016/03/28 05:54:51 [DEBUG] raft: Node 127.0.0.1:15093 updated peer set (2): [127.0.0.1:15097 127.0.0.1:15093]
2016/03/28 05:54:51 [INFO] raft: Added peer 127.0.0.1:15097, starting replication
2016/03/28 05:54:51 [DEBUG] raft-net: 127.0.0.1:15097 accepted connection from: 127.0.0.1:52357
2016/03/28 05:54:51 [DEBUG] raft-net: 127.0.0.1:15097 accepted connection from: 127.0.0.1:52358
2016/03/28 05:54:51 [DEBUG] raft: Failed to contact 127.0.0.1:15097 in 278.409667ms
2016/03/28 05:54:51 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/03/28 05:54:51 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 05:54:51 [INFO] raft: Node at 127.0.0.1:15093 [Follower] entering Follower state
2016/03/28 05:54:51 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 05:54:51 [ERR] consul.acl: Apply failed: node is not the leader
2016/03/28 05:54:51 [WARN] raft: AppendEntries to 127.0.0.1:15097 rejected, sending older logs (next: 1)
2016/03/28 05:54:51 [INFO] consul: shutting down server
2016/03/28 05:54:51 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:51 [ERR] consul: failed to reconcile member: {Node 15096 127.0.0.1 15098 map[vsn:2 vsn_min:1 vsn_max:3 build: port:15097 role:consul dc:dc1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:54:51 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 05:54:51 [INFO] consul: cluster leadership lost
2016/03/28 05:54:51 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/28 05:54:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:54:51 [INFO] raft: Node at 127.0.0.1:15093 [Candidate] entering Candidate state
2016/03/28 05:54:51 [DEBUG] memberlist: Failed UDP ping: Node 15096 (timeout reached)
2016/03/28 05:54:51 [INFO] memberlist: Suspect Node 15096 has failed, no acks received
2016/03/28 05:54:52 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:52 [DEBUG] memberlist: Failed UDP ping: Node 15096 (timeout reached)
2016/03/28 05:54:52 [INFO] memberlist: Suspect Node 15096 has failed, no acks received
2016/03/28 05:54:52 [INFO] memberlist: Marking Node 15096 as failed, suspect timeout reached
2016/03/28 05:54:52 [INFO] serf: EventMemberFailed: Node 15096 127.0.0.1
2016/03/28 05:54:52 [INFO] consul: removing LAN server Node 15096 (Addr: 127.0.0.1:15097) (DC: dc1)
2016/03/28 05:54:52 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:54:52 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15097: EOF
2016/03/28 05:54:52 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:54:52 [ERR] raft: Failed to heartbeat to 127.0.0.1:15097: EOF
2016/03/28 05:54:52 [DEBUG] raft: Node 127.0.0.1:15097 updated peer set (2): [127.0.0.1:15093]
2016/03/28 05:54:52 [INFO] consul: shutting down server
2016/03/28 05:54:52 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:52 [WARN] serf: Shutdown without a Leave
2016/03/28 05:54:52 [DEBUG] raft: Votes needed: 2
2016/03/28 05:54:52 [DEBUG] raft: Vote granted from 127.0.0.1:15093. Tally: 1
2016/03/28 05:54:52 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:54:52 [INFO] raft: Node at 127.0.0.1:15093 [Candidate] entering Candidate state
2016/03/28 05:54:52 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15097: dial tcp 127.0.0.1:15097: getsockopt: connection refused
2016/03/28 05:54:53 [DEBUG] raft: Votes needed: 2
2016/03/28 05:55:02 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15097: read tcp 127.0.0.1:52361->127.0.0.1:15097: i/o timeout
--- FAIL: TestACL_DownPolicy_Deny (12.86s)
	acl_test.go:430: err: node is not the leader
=== RUN   TestACL_DownPolicy_Allow
2016/03/28 05:55:03 [INFO] raft: Node at 127.0.0.1:15101 [Follower] entering Follower state
2016/03/28 05:55:03 [INFO] serf: EventMemberJoin: Node 15100 127.0.0.1
2016/03/28 05:55:03 [INFO] consul: adding LAN server Node 15100 (Addr: 127.0.0.1:15101) (DC: dc1)
2016/03/28 05:55:03 [INFO] serf: EventMemberJoin: Node 15100.dc1 127.0.0.1
2016/03/28 05:55:03 [INFO] consul: adding WAN server Node 15100.dc1 (Addr: 127.0.0.1:15101) (DC: dc1)
2016/03/28 05:55:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:03 [INFO] raft: Node at 127.0.0.1:15101 [Candidate] entering Candidate state
2016/03/28 05:55:03 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:03 [DEBUG] raft: Vote granted from 127.0.0.1:15101. Tally: 1
2016/03/28 05:55:03 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:03 [INFO] raft: Node at 127.0.0.1:15101 [Leader] entering Leader state
2016/03/28 05:55:03 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:03 [INFO] raft: Node at 127.0.0.1:15105 [Follower] entering Follower state
2016/03/28 05:55:03 [INFO] consul: New leader elected: Node 15100
2016/03/28 05:55:03 [INFO] serf: EventMemberJoin: Node 15104 127.0.0.1
2016/03/28 05:55:03 [INFO] consul: adding LAN server Node 15104 (Addr: 127.0.0.1:15105) (DC: dc1)
2016/03/28 05:55:03 [INFO] serf: EventMemberJoin: Node 15104.dc1 127.0.0.1
2016/03/28 05:55:03 [INFO] consul: adding WAN server Node 15104.dc1 (Addr: 127.0.0.1:15105) (DC: dc1)
2016/03/28 05:55:03 [DEBUG] memberlist: TCP connection from=127.0.0.1:59745
2016/03/28 05:55:03 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15102
2016/03/28 05:55:03 [INFO] serf: EventMemberJoin: Node 15104 127.0.0.1
2016/03/28 05:55:03 [INFO] consul: adding LAN server Node 15104 (Addr: 127.0.0.1:15105) (DC: dc1)
2016/03/28 05:55:03 [INFO] serf: EventMemberJoin: Node 15100 127.0.0.1
2016/03/28 05:55:03 [INFO] consul: adding LAN server Node 15100 (Addr: 127.0.0.1:15101) (DC: dc1)
2016/03/28 05:55:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:03 [DEBUG] serf: messageJoinType: Node 15104
2016/03/28 05:55:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:03 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:55:03 [DEBUG] serf: messageJoinType: Node 15104
2016/03/28 05:55:03 [DEBUG] serf: messageJoinType: Node 15104
2016/03/28 05:55:03 [DEBUG] serf: messageJoinType: Node 15104
2016/03/28 05:55:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:03 [DEBUG] serf: messageJoinType: Node 15104
2016/03/28 05:55:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:03 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:03 [DEBUG] raft: Node 127.0.0.1:15101 updated peer set (2): [127.0.0.1:15101]
2016/03/28 05:55:03 [DEBUG] serf: messageJoinType: Node 15104
2016/03/28 05:55:03 [DEBUG] serf: messageJoinType: Node 15104
2016/03/28 05:55:04 [DEBUG] serf: messageJoinType: Node 15104
2016/03/28 05:55:04 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:04 [INFO] consul: member 'Node 15100' joined, marking health alive
2016/03/28 05:55:05 [DEBUG] raft: Node 127.0.0.1:15101 updated peer set (2): [127.0.0.1:15105 127.0.0.1:15101]
2016/03/28 05:55:05 [INFO] raft: Added peer 127.0.0.1:15105, starting replication
2016/03/28 05:55:05 [DEBUG] raft-net: 127.0.0.1:15105 accepted connection from: 127.0.0.1:46496
2016/03/28 05:55:05 [DEBUG] raft-net: 127.0.0.1:15105 accepted connection from: 127.0.0.1:46497
2016/03/28 05:55:05 [DEBUG] raft: Failed to contact 127.0.0.1:15105 in 141.921667ms
2016/03/28 05:55:05 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 05:55:05 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/03/28 05:55:05 [INFO] raft: Node at 127.0.0.1:15101 [Follower] entering Follower state
2016/03/28 05:55:05 [ERR] consul.acl: Apply failed: node is not the leader
2016/03/28 05:55:05 [INFO] consul: shutting down server
2016/03/28 05:55:05 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:05 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 05:55:05 [WARN] raft: AppendEntries to 127.0.0.1:15105 rejected, sending older logs (next: 1)
2016/03/28 05:55:05 [INFO] consul: cluster leadership lost
2016/03/28 05:55:05 [ERR] consul: failed to reconcile member: {Node 15104 127.0.0.1 15106 map[build: port:15105 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:55:05 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 05:55:05 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/28 05:55:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:05 [INFO] raft: Node at 127.0.0.1:15101 [Candidate] entering Candidate state
2016/03/28 05:55:05 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:05 [DEBUG] memberlist: Failed UDP ping: Node 15104 (timeout reached)
2016/03/28 05:55:05 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:55:05 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15105: EOF
2016/03/28 05:55:05 [INFO] memberlist: Suspect Node 15104 has failed, no acks received
2016/03/28 05:55:05 [DEBUG] memberlist: Failed UDP ping: Node 15104 (timeout reached)
2016/03/28 05:55:05 [INFO] memberlist: Suspect Node 15104 has failed, no acks received
2016/03/28 05:55:05 [INFO] memberlist: Marking Node 15104 as failed, suspect timeout reached
2016/03/28 05:55:05 [INFO] serf: EventMemberFailed: Node 15104 127.0.0.1
2016/03/28 05:55:05 [INFO] consul: removing LAN server Node 15104 (Addr: 127.0.0.1:15105) (DC: dc1)
2016/03/28 05:55:05 [DEBUG] raft: Node 127.0.0.1:15105 updated peer set (2): [127.0.0.1:15101]
2016/03/28 05:55:05 [INFO] consul: shutting down server
2016/03/28 05:55:05 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:05 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:05 [DEBUG] raft: Votes needed: 2
2016/03/28 05:55:15 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15105: read tcp 127.0.0.1:46499->127.0.0.1:15105: i/o timeout
--- FAIL: TestACL_DownPolicy_Allow (13.28s)
	acl_test.go:504: err: node is not the leader
=== RUN   TestACL_DownPolicy_ExtendCache
2016/03/28 05:55:16 [INFO] raft: Node at 127.0.0.1:15109 [Follower] entering Follower state
2016/03/28 05:55:16 [INFO] serf: EventMemberJoin: Node 15108 127.0.0.1
2016/03/28 05:55:16 [INFO] consul: adding LAN server Node 15108 (Addr: 127.0.0.1:15109) (DC: dc1)
2016/03/28 05:55:16 [INFO] serf: EventMemberJoin: Node 15108.dc1 127.0.0.1
2016/03/28 05:55:16 [INFO] consul: adding WAN server Node 15108.dc1 (Addr: 127.0.0.1:15109) (DC: dc1)
2016/03/28 05:55:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:16 [INFO] raft: Node at 127.0.0.1:15109 [Candidate] entering Candidate state
2016/03/28 05:55:17 [INFO] raft: Node at 127.0.0.1:15113 [Follower] entering Follower state
2016/03/28 05:55:17 [INFO] serf: EventMemberJoin: Node 15112 127.0.0.1
2016/03/28 05:55:17 [INFO] consul: adding LAN server Node 15112 (Addr: 127.0.0.1:15113) (DC: dc1)
2016/03/28 05:55:17 [INFO] serf: EventMemberJoin: Node 15112.dc1 127.0.0.1
2016/03/28 05:55:17 [INFO] consul: adding WAN server Node 15112.dc1 (Addr: 127.0.0.1:15113) (DC: dc1)
2016/03/28 05:55:17 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15110
2016/03/28 05:55:17 [DEBUG] memberlist: TCP connection from=127.0.0.1:33532
2016/03/28 05:55:17 [INFO] serf: EventMemberJoin: Node 15112 127.0.0.1
2016/03/28 05:55:17 [INFO] serf: EventMemberJoin: Node 15108 127.0.0.1
2016/03/28 05:55:17 [INFO] consul: adding LAN server Node 15112 (Addr: 127.0.0.1:15113) (DC: dc1)
2016/03/28 05:55:17 [INFO] consul: adding LAN server Node 15108 (Addr: 127.0.0.1:15109) (DC: dc1)
2016/03/28 05:55:17 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:17 [DEBUG] raft: Vote granted from 127.0.0.1:15109. Tally: 1
2016/03/28 05:55:17 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:17 [INFO] raft: Node at 127.0.0.1:15109 [Leader] entering Leader state
2016/03/28 05:55:17 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:17 [INFO] consul: New leader elected: Node 15108
2016/03/28 05:55:17 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:17 [INFO] consul: New leader elected: Node 15108
2016/03/28 05:55:17 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:55:17 [DEBUG] serf: messageJoinType: Node 15112
2016/03/28 05:55:17 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:17 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:17 [DEBUG] serf: messageJoinType: Node 15112
2016/03/28 05:55:17 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:17 [DEBUG] serf: messageJoinType: Node 15112
2016/03/28 05:55:17 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:17 [DEBUG] serf: messageJoinType: Node 15112
2016/03/28 05:55:17 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:17 [DEBUG] serf: messageJoinType: Node 15112
2016/03/28 05:55:17 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:17 [DEBUG] serf: messageJoinType: Node 15112
2016/03/28 05:55:17 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:17 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:17 [DEBUG] raft: Node 127.0.0.1:15109 updated peer set (2): [127.0.0.1:15109]
2016/03/28 05:55:17 [DEBUG] serf: messageJoinType: Node 15112
2016/03/28 05:55:17 [DEBUG] serf: messageJoinType: Node 15112
2016/03/28 05:55:17 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:17 [INFO] consul: member 'Node 15108' joined, marking health alive
2016/03/28 05:55:17 [DEBUG] raft: Node 127.0.0.1:15109 updated peer set (2): [127.0.0.1:15113 127.0.0.1:15109]
2016/03/28 05:55:17 [INFO] raft: Added peer 127.0.0.1:15113, starting replication
2016/03/28 05:55:17 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:38545
2016/03/28 05:55:17 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:38546
2016/03/28 05:55:18 [DEBUG] raft: Failed to contact 127.0.0.1:15113 in 175.378667ms
2016/03/28 05:55:18 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 05:55:18 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/03/28 05:55:18 [INFO] raft: Node at 127.0.0.1:15109 [Follower] entering Follower state
2016/03/28 05:55:18 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 05:55:18 [ERR] consul.acl: Apply failed: node is not the leader
2016/03/28 05:55:18 [ERR] consul: failed to reconcile member: {Node 15112 127.0.0.1 15114 map[dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15113 role:consul] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:55:18 [INFO] consul: shutting down server
2016/03/28 05:55:18 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:18 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 05:55:18 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/28 05:55:18 [INFO] consul: cluster leadership lost
2016/03/28 05:55:18 [WARN] raft: AppendEntries to 127.0.0.1:15113 rejected, sending older logs (next: 1)
2016/03/28 05:55:18 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:18 [INFO] raft: Node at 127.0.0.1:15109 [Candidate] entering Candidate state
2016/03/28 05:55:18 [DEBUG] memberlist: Failed UDP ping: Node 15112 (timeout reached)
2016/03/28 05:55:18 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:18 [INFO] memberlist: Suspect Node 15112 has failed, no acks received
2016/03/28 05:55:18 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:55:18 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:55:18 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15113: EOF
2016/03/28 05:55:18 [ERR] raft: Failed to heartbeat to 127.0.0.1:15113: EOF
2016/03/28 05:55:18 [DEBUG] memberlist: Failed UDP ping: Node 15112 (timeout reached)
2016/03/28 05:55:18 [INFO] memberlist: Suspect Node 15112 has failed, no acks received
2016/03/28 05:55:18 [INFO] memberlist: Marking Node 15112 as failed, suspect timeout reached
2016/03/28 05:55:18 [INFO] serf: EventMemberFailed: Node 15112 127.0.0.1
2016/03/28 05:55:18 [INFO] consul: removing LAN server Node 15112 (Addr: 127.0.0.1:15113) (DC: dc1)
2016/03/28 05:55:18 [DEBUG] raft: Node 127.0.0.1:15113 updated peer set (2): [127.0.0.1:15109]
2016/03/28 05:55:18 [INFO] consul: shutting down server
2016/03/28 05:55:18 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:18 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:18 [DEBUG] raft: Votes needed: 2
2016/03/28 05:55:28 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15113: read tcp 127.0.0.1:38549->127.0.0.1:15113: i/o timeout
--- FAIL: TestACL_DownPolicy_ExtendCache (12.92s)
	acl_test.go:580: err: node is not the leader
=== RUN   TestACL_MultiDC_Found
2016/03/28 05:55:28 [INFO] raft: Node at 127.0.0.1:15117 [Follower] entering Follower state
2016/03/28 05:55:28 [INFO] serf: EventMemberJoin: Node 15116 127.0.0.1
2016/03/28 05:55:28 [INFO] consul: adding LAN server Node 15116 (Addr: 127.0.0.1:15117) (DC: dc1)
2016/03/28 05:55:28 [INFO] serf: EventMemberJoin: Node 15116.dc1 127.0.0.1
2016/03/28 05:55:28 [INFO] consul: adding WAN server Node 15116.dc1 (Addr: 127.0.0.1:15117) (DC: dc1)
2016/03/28 05:55:28 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:28 [INFO] raft: Node at 127.0.0.1:15117 [Candidate] entering Candidate state
2016/03/28 05:55:29 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:29 [DEBUG] raft: Vote granted from 127.0.0.1:15117. Tally: 1
2016/03/28 05:55:29 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:29 [INFO] raft: Node at 127.0.0.1:15117 [Leader] entering Leader state
2016/03/28 05:55:29 [INFO] raft: Node at 127.0.0.1:15121 [Follower] entering Follower state
2016/03/28 05:55:29 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:29 [INFO] consul: New leader elected: Node 15116
2016/03/28 05:55:29 [INFO] serf: EventMemberJoin: Node 15120 127.0.0.1
2016/03/28 05:55:29 [INFO] consul: adding LAN server Node 15120 (Addr: 127.0.0.1:15121) (DC: dc2)
2016/03/28 05:55:29 [INFO] serf: EventMemberJoin: Node 15120.dc2 127.0.0.1
2016/03/28 05:55:29 [INFO] consul: adding WAN server Node 15120.dc2 (Addr: 127.0.0.1:15121) (DC: dc2)
2016/03/28 05:55:29 [DEBUG] memberlist: TCP connection from=127.0.0.1:46010
2016/03/28 05:55:29 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15119
2016/03/28 05:55:29 [INFO] serf: EventMemberJoin: Node 15120.dc2 127.0.0.1
2016/03/28 05:55:29 [INFO] serf: EventMemberJoin: Node 15116.dc1 127.0.0.1
2016/03/28 05:55:29 [INFO] consul: adding WAN server Node 15120.dc2 (Addr: 127.0.0.1:15121) (DC: dc2)
2016/03/28 05:55:29 [INFO] consul: adding WAN server Node 15116.dc1 (Addr: 127.0.0.1:15117) (DC: dc1)
2016/03/28 05:55:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:29 [INFO] raft: Node at 127.0.0.1:15121 [Candidate] entering Candidate state
2016/03/28 05:55:29 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/28 05:55:29 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/28 05:55:29 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/28 05:55:29 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/28 05:55:29 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/28 05:55:29 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/28 05:55:29 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/28 05:55:29 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:29 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/28 05:55:29 [DEBUG] raft: Node 127.0.0.1:15117 updated peer set (2): [127.0.0.1:15117]
2016/03/28 05:55:29 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:30 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:30 [DEBUG] raft: Vote granted from 127.0.0.1:15121. Tally: 1
2016/03/28 05:55:30 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:30 [INFO] raft: Node at 127.0.0.1:15121 [Leader] entering Leader state
2016/03/28 05:55:30 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:30 [INFO] consul: New leader elected: Node 15120
2016/03/28 05:55:30 [INFO] consul: member 'Node 15116' joined, marking health alive
2016/03/28 05:55:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:30 [DEBUG] raft: Node 127.0.0.1:15121 updated peer set (2): [127.0.0.1:15121]
2016/03/28 05:55:30 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:30 [INFO] consul: member 'Node 15120' joined, marking health alive
2016/03/28 05:55:31 [INFO] consul: shutting down server
2016/03/28 05:55:31 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:31 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:31 [DEBUG] memberlist: Failed UDP ping: Node 15120.dc2 (timeout reached)
2016/03/28 05:55:32 [INFO] memberlist: Suspect Node 15120.dc2 has failed, no acks received
2016/03/28 05:55:32 [DEBUG] memberlist: Failed UDP ping: Node 15120.dc2 (timeout reached)
2016/03/28 05:55:32 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:55:32 [INFO] consul: shutting down server
2016/03/28 05:55:32 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:32 [INFO] memberlist: Suspect Node 15120.dc2 has failed, no acks received
2016/03/28 05:55:32 [WARN] serf: Shutdown without a Leave
--- PASS: TestACL_MultiDC_Found (3.78s)
=== RUN   TestACL_filterHealthChecks
2016/03/28 05:55:32 [DEBUG] consul: dropping check "check1" from result due to ACLs
--- PASS: TestACL_filterHealthChecks (0.00s)
=== RUN   TestACL_filterServices
2016/03/28 05:55:32 [DEBUG] consul: dropping service "service1" from result due to ACLs
2016/03/28 05:55:32 [DEBUG] consul: dropping service "service2" from result due to ACLs
--- PASS: TestACL_filterServices (0.00s)
=== RUN   TestACL_filterServiceNodes
2016/03/28 05:55:32 [DEBUG] consul: dropping node "node1" from result due to ACLs
--- PASS: TestACL_filterServiceNodes (0.00s)
=== RUN   TestACL_filterNodeServices
2016/03/28 05:55:32 [DEBUG] consul: dropping service "foo" from result due to ACLs
--- PASS: TestACL_filterNodeServices (0.00s)
=== RUN   TestACL_filterCheckServiceNodes
2016/03/28 05:55:32 [DEBUG] consul: dropping node "node1" from result due to ACLs
--- PASS: TestACL_filterCheckServiceNodes (0.00s)
=== RUN   TestACL_filterNodeDump
2016/03/28 05:55:32 [DEBUG] consul: dropping service "foo" from result due to ACLs
2016/03/28 05:55:32 [DEBUG] consul: dropping check "check1" from result due to ACLs
--- PASS: TestACL_filterNodeDump (0.00s)
=== RUN   TestACL_unhandledFilterType
2016/03/28 05:55:32 [INFO] memberlist: Marking Node 15120.dc2 as failed, suspect timeout reached
2016/03/28 05:55:32 [INFO] serf: EventMemberFailed: Node 15120.dc2 127.0.0.1
2016/03/28 05:55:32 [INFO] raft: Node at 127.0.0.1:15125 [Follower] entering Follower state
2016/03/28 05:55:32 [INFO] serf: EventMemberJoin: Node 15124 127.0.0.1
2016/03/28 05:55:32 [INFO] consul: adding LAN server Node 15124 (Addr: 127.0.0.1:15125) (DC: dc1)
2016/03/28 05:55:32 [INFO] serf: EventMemberJoin: Node 15124.dc1 127.0.0.1
2016/03/28 05:55:32 [INFO] consul: adding WAN server Node 15124.dc1 (Addr: 127.0.0.1:15125) (DC: dc1)
2016/03/28 05:55:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:32 [INFO] raft: Node at 127.0.0.1:15125 [Candidate] entering Candidate state
2016/03/28 05:55:33 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:33 [DEBUG] raft: Vote granted from 127.0.0.1:15125. Tally: 1
2016/03/28 05:55:33 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:33 [INFO] raft: Node at 127.0.0.1:15125 [Leader] entering Leader state
2016/03/28 05:55:33 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:33 [INFO] consul: New leader elected: Node 15124
2016/03/28 05:55:33 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:33 [DEBUG] raft: Node 127.0.0.1:15125 updated peer set (2): [127.0.0.1:15125]
2016/03/28 05:55:33 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:33 [INFO] consul: member 'Node 15124' joined, marking health alive
2016/03/28 05:55:34 [INFO] consul: shutting down server
2016/03/28 05:55:34 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:35 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:35 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACL_unhandledFilterType (3.18s)
=== RUN   TestCatalogRegister
2016/03/28 05:55:35 [INFO] raft: Node at 127.0.0.1:15129 [Follower] entering Follower state
2016/03/28 05:55:35 [INFO] serf: EventMemberJoin: Node 15128 127.0.0.1
2016/03/28 05:55:35 [INFO] consul: adding LAN server Node 15128 (Addr: 127.0.0.1:15129) (DC: dc1)
2016/03/28 05:55:35 [INFO] serf: EventMemberJoin: Node 15128.dc1 127.0.0.1
2016/03/28 05:55:35 [INFO] consul: adding WAN server Node 15128.dc1 (Addr: 127.0.0.1:15129) (DC: dc1)
2016/03/28 05:55:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:35 [INFO] raft: Node at 127.0.0.1:15129 [Candidate] entering Candidate state
2016/03/28 05:55:36 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:36 [DEBUG] raft: Vote granted from 127.0.0.1:15129. Tally: 1
2016/03/28 05:55:36 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:36 [INFO] raft: Node at 127.0.0.1:15129 [Leader] entering Leader state
2016/03/28 05:55:36 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:36 [INFO] consul: New leader elected: Node 15128
2016/03/28 05:55:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:36 [DEBUG] raft: Node 127.0.0.1:15129 updated peer set (2): [127.0.0.1:15129]
2016/03/28 05:55:36 [DEBUG] consul: reset tombstone GC to index 3
2016/03/28 05:55:36 [INFO] consul: member 'Node 15128' joined, marking health alive
2016/03/28 05:55:36 [INFO] consul: shutting down server
2016/03/28 05:55:36 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:36 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:36 [ERR] consul.catalog: Register failed: raft is already shutdown
2016/03/28 05:55:36 [ERR] consul: failed to reconcile member: {Node 15128 127.0.0.1 15130 map[vsn:2 vsn_min:1 vsn_max:3 build: port:15129 bootstrap:1 role:consul dc:dc1] alive 1 3 2 2 4 4}: raft is already shutdown
2016/03/28 05:55:36 [ERR] consul: failed to reconcile: raft is already shutdown
2016/03/28 05:55:36 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogRegister (1.52s)
=== RUN   TestCatalogRegister_ACLDeny
2016/03/28 05:55:37 [INFO] raft: Node at 127.0.0.1:15133 [Follower] entering Follower state
2016/03/28 05:55:37 [INFO] serf: EventMemberJoin: Node 15132 127.0.0.1
2016/03/28 05:55:37 [INFO] consul: adding LAN server Node 15132 (Addr: 127.0.0.1:15133) (DC: dc1)
2016/03/28 05:55:37 [INFO] serf: EventMemberJoin: Node 15132.dc1 127.0.0.1
2016/03/28 05:55:37 [INFO] consul: adding WAN server Node 15132.dc1 (Addr: 127.0.0.1:15133) (DC: dc1)
2016/03/28 05:55:37 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:37 [INFO] raft: Node at 127.0.0.1:15133 [Candidate] entering Candidate state
2016/03/28 05:55:37 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:37 [DEBUG] raft: Vote granted from 127.0.0.1:15133. Tally: 1
2016/03/28 05:55:37 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:37 [INFO] raft: Node at 127.0.0.1:15133 [Leader] entering Leader state
2016/03/28 05:55:37 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:37 [INFO] consul: New leader elected: Node 15132
2016/03/28 05:55:38 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:38 [DEBUG] raft: Node 127.0.0.1:15133 updated peer set (2): [127.0.0.1:15133]
2016/03/28 05:55:38 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:38 [INFO] consul: member 'Node 15132' joined, marking health alive
2016/03/28 05:55:39 [WARN] consul.catalog: Register of service 'db' on 'foo' denied due to ACLs
2016/03/28 05:55:39 [INFO] consul: shutting down server
2016/03/28 05:55:39 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:39 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:39 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:55:39 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogRegister_ACLDeny (2.68s)
=== RUN   TestCatalogRegister_ForwardLeader
2016/03/28 05:55:40 [INFO] raft: Node at 127.0.0.1:15137 [Follower] entering Follower state
2016/03/28 05:55:40 [INFO] serf: EventMemberJoin: Node 15136 127.0.0.1
2016/03/28 05:55:40 [INFO] consul: adding LAN server Node 15136 (Addr: 127.0.0.1:15137) (DC: dc1)
2016/03/28 05:55:40 [INFO] serf: EventMemberJoin: Node 15136.dc1 127.0.0.1
2016/03/28 05:55:40 [INFO] consul: adding WAN server Node 15136.dc1 (Addr: 127.0.0.1:15137) (DC: dc1)
2016/03/28 05:55:40 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:40 [INFO] raft: Node at 127.0.0.1:15137 [Candidate] entering Candidate state
2016/03/28 05:55:40 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:40 [DEBUG] raft: Vote granted from 127.0.0.1:15137. Tally: 1
2016/03/28 05:55:40 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:40 [INFO] raft: Node at 127.0.0.1:15137 [Leader] entering Leader state
2016/03/28 05:55:40 [INFO] raft: Node at 127.0.0.1:15141 [Follower] entering Follower state
2016/03/28 05:55:40 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:40 [INFO] consul: New leader elected: Node 15136
2016/03/28 05:55:40 [INFO] serf: EventMemberJoin: Node 15140 127.0.0.1
2016/03/28 05:55:40 [INFO] consul: adding LAN server Node 15140 (Addr: 127.0.0.1:15141) (DC: dc1)
2016/03/28 05:55:40 [INFO] serf: EventMemberJoin: Node 15140.dc1 127.0.0.1
2016/03/28 05:55:40 [INFO] consul: adding WAN server Node 15140.dc1 (Addr: 127.0.0.1:15141) (DC: dc1)
2016/03/28 05:55:40 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15138
2016/03/28 05:55:40 [DEBUG] memberlist: TCP connection from=127.0.0.1:53097
2016/03/28 05:55:40 [INFO] serf: EventMemberJoin: Node 15140 127.0.0.1
2016/03/28 05:55:40 [INFO] consul: adding LAN server Node 15140 (Addr: 127.0.0.1:15141) (DC: dc1)
2016/03/28 05:55:40 [INFO] serf: EventMemberJoin: Node 15136 127.0.0.1
2016/03/28 05:55:40 [INFO] consul: adding LAN server Node 15136 (Addr: 127.0.0.1:15137) (DC: dc1)
2016/03/28 05:55:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:41 [INFO] raft: Node at 127.0.0.1:15141 [Candidate] entering Candidate state
2016/03/28 05:55:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:41 [DEBUG] serf: messageJoinType: Node 15140
2016/03/28 05:55:41 [DEBUG] serf: messageJoinType: Node 15140
2016/03/28 05:55:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:41 [DEBUG] serf: messageJoinType: Node 15140
2016/03/28 05:55:41 [DEBUG] serf: messageJoinType: Node 15140
2016/03/28 05:55:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:41 [DEBUG] serf: messageJoinType: Node 15140
2016/03/28 05:55:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:41 [DEBUG] serf: messageJoinType: Node 15140
2016/03/28 05:55:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:41 [DEBUG] serf: messageJoinType: Node 15140
2016/03/28 05:55:41 [DEBUG] serf: messageJoinType: Node 15140
2016/03/28 05:55:41 [DEBUG] raft: Node 127.0.0.1:15137 updated peer set (2): [127.0.0.1:15137]
2016/03/28 05:55:41 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:41 [INFO] consul: member 'Node 15136' joined, marking health alive
2016/03/28 05:55:41 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 05:55:41 [INFO] consul: member 'Node 15140' joined, marking health alive
2016/03/28 05:55:41 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:41 [DEBUG] raft: Vote granted from 127.0.0.1:15141. Tally: 1
2016/03/28 05:55:41 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:41 [INFO] raft: Node at 127.0.0.1:15141 [Leader] entering Leader state
2016/03/28 05:55:41 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:41 [INFO] consul: New leader elected: Node 15140
2016/03/28 05:55:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:41 [INFO] consul: New leader elected: Node 15140
2016/03/28 05:55:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:41 [DEBUG] raft: Node 127.0.0.1:15141 updated peer set (2): [127.0.0.1:15141]
2016/03/28 05:55:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:55:42 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 05:55:42 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 05:55:42 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:42 [INFO] consul: member 'Node 15140' joined, marking health alive
2016/03/28 05:55:42 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 05:55:42 [ERR] consul: 'Node 15136' and 'Node 15140' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 05:55:42 [INFO] consul: member 'Node 15136' joined, marking health alive
2016/03/28 05:55:42 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 05:55:43 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 05:55:43 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 05:55:43 [ERR] consul: 'Node 15136' and 'Node 15140' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 05:55:43 [INFO] consul: shutting down server
2016/03/28 05:55:43 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:44 [DEBUG] memberlist: Failed UDP ping: Node 15140 (timeout reached)
2016/03/28 05:55:44 [INFO] memberlist: Suspect Node 15140 has failed, no acks received
2016/03/28 05:55:44 [DEBUG] memberlist: Failed UDP ping: Node 15140 (timeout reached)
2016/03/28 05:55:44 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:44 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 05:55:44 [INFO] memberlist: Suspect Node 15140 has failed, no acks received
2016/03/28 05:55:44 [INFO] memberlist: Marking Node 15140 as failed, suspect timeout reached
2016/03/28 05:55:44 [INFO] serf: EventMemberFailed: Node 15140 127.0.0.1
2016/03/28 05:55:44 [INFO] consul: removing LAN server Node 15140 (Addr: 127.0.0.1:15141) (DC: dc1)
2016/03/28 05:55:44 [ERR] consul: 'Node 15136' and 'Node 15140' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 05:55:44 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 05:55:44 [INFO] consul: shutting down server
2016/03/28 05:55:44 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:44 [DEBUG] memberlist: Failed UDP ping: Node 15140 (timeout reached)
2016/03/28 05:55:44 [INFO] memberlist: Suspect Node 15140 has failed, no acks received
2016/03/28 05:55:44 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:44 [INFO] consul: member 'Node 15140' failed, marking health critical
2016/03/28 05:55:44 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/03/28 05:55:44 [ERR] consul: failed to reconcile member: {Node 15140 127.0.0.1 15142 map[bootstrap:1 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15141] failed 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:55:44 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 05:55:44 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogRegister_ForwardLeader (5.04s)
=== RUN   TestCatalogRegister_ForwardDC
2016/03/28 05:55:45 [INFO] raft: Node at 127.0.0.1:15145 [Follower] entering Follower state
2016/03/28 05:55:45 [INFO] serf: EventMemberJoin: Node 15144 127.0.0.1
2016/03/28 05:55:45 [INFO] consul: adding LAN server Node 15144 (Addr: 127.0.0.1:15145) (DC: dc1)
2016/03/28 05:55:45 [INFO] serf: EventMemberJoin: Node 15144.dc1 127.0.0.1
2016/03/28 05:55:45 [INFO] consul: adding WAN server Node 15144.dc1 (Addr: 127.0.0.1:15145) (DC: dc1)
2016/03/28 05:55:45 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:45 [INFO] raft: Node at 127.0.0.1:15145 [Candidate] entering Candidate state
2016/03/28 05:55:46 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:46 [DEBUG] raft: Vote granted from 127.0.0.1:15145. Tally: 1
2016/03/28 05:55:46 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:46 [INFO] raft: Node at 127.0.0.1:15145 [Leader] entering Leader state
2016/03/28 05:55:46 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:46 [INFO] raft: Node at 127.0.0.1:15149 [Follower] entering Follower state
2016/03/28 05:55:46 [INFO] consul: New leader elected: Node 15144
2016/03/28 05:55:46 [INFO] serf: EventMemberJoin: Node 15148 127.0.0.1
2016/03/28 05:55:46 [INFO] consul: adding LAN server Node 15148 (Addr: 127.0.0.1:15149) (DC: dc2)
2016/03/28 05:55:46 [INFO] serf: EventMemberJoin: Node 15148.dc2 127.0.0.1
2016/03/28 05:55:46 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15147
2016/03/28 05:55:46 [DEBUG] memberlist: TCP connection from=127.0.0.1:59762
2016/03/28 05:55:46 [INFO] consul: adding WAN server Node 15148.dc2 (Addr: 127.0.0.1:15149) (DC: dc2)
2016/03/28 05:55:46 [INFO] serf: EventMemberJoin: Node 15148.dc2 127.0.0.1
2016/03/28 05:55:46 [INFO] serf: EventMemberJoin: Node 15144.dc1 127.0.0.1
2016/03/28 05:55:46 [INFO] consul: adding WAN server Node 15148.dc2 (Addr: 127.0.0.1:15149) (DC: dc2)
2016/03/28 05:55:46 [INFO] consul: adding WAN server Node 15144.dc1 (Addr: 127.0.0.1:15145) (DC: dc1)
2016/03/28 05:55:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:46 [INFO] raft: Node at 127.0.0.1:15149 [Candidate] entering Candidate state
2016/03/28 05:55:46 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/28 05:55:46 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/28 05:55:46 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/28 05:55:46 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/28 05:55:46 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/28 05:55:46 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/28 05:55:46 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/28 05:55:46 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:46 [DEBUG] raft: Node 127.0.0.1:15145 updated peer set (2): [127.0.0.1:15145]
2016/03/28 05:55:47 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/28 05:55:47 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:47 [INFO] consul: member 'Node 15144' joined, marking health alive
2016/03/28 05:55:47 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:47 [DEBUG] raft: Vote granted from 127.0.0.1:15149. Tally: 1
2016/03/28 05:55:47 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:47 [INFO] raft: Node at 127.0.0.1:15149 [Leader] entering Leader state
2016/03/28 05:55:47 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:47 [INFO] consul: New leader elected: Node 15148
2016/03/28 05:55:47 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:47 [DEBUG] raft: Node 127.0.0.1:15149 updated peer set (2): [127.0.0.1:15149]
2016/03/28 05:55:48 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:48 [INFO] consul: member 'Node 15148' joined, marking health alive
2016/03/28 05:55:49 [INFO] consul: shutting down server
2016/03/28 05:55:49 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:49 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:49 [INFO] consul: shutting down server
2016/03/28 05:55:49 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:49 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogRegister_ForwardDC (5.34s)
=== RUN   TestCatalogDeregister
2016/03/28 05:55:50 [INFO] raft: Node at 127.0.0.1:15153 [Follower] entering Follower state
2016/03/28 05:55:50 [INFO] serf: EventMemberJoin: Node 15152 127.0.0.1
2016/03/28 05:55:50 [INFO] consul: adding LAN server Node 15152 (Addr: 127.0.0.1:15153) (DC: dc1)
2016/03/28 05:55:50 [INFO] serf: EventMemberJoin: Node 15152.dc1 127.0.0.1
2016/03/28 05:55:50 [INFO] consul: adding WAN server Node 15152.dc1 (Addr: 127.0.0.1:15153) (DC: dc1)
2016/03/28 05:55:50 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:50 [INFO] raft: Node at 127.0.0.1:15153 [Candidate] entering Candidate state
2016/03/28 05:55:50 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:50 [DEBUG] raft: Vote granted from 127.0.0.1:15153. Tally: 1
2016/03/28 05:55:50 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:50 [INFO] raft: Node at 127.0.0.1:15153 [Leader] entering Leader state
2016/03/28 05:55:50 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:50 [INFO] consul: New leader elected: Node 15152
2016/03/28 05:55:51 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:51 [DEBUG] raft: Node 127.0.0.1:15153 updated peer set (2): [127.0.0.1:15153]
2016/03/28 05:55:51 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:51 [INFO] consul: member 'Node 15152' joined, marking health alive
2016/03/28 05:55:51 [INFO] consul: shutting down server
2016/03/28 05:55:51 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:51 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogDeregister (2.01s)
=== RUN   TestCatalogListDatacenters
2016/03/28 05:55:52 [INFO] serf: EventMemberJoin: Node 15156 127.0.0.1
2016/03/28 05:55:52 [INFO] consul: adding LAN server Node 15156 (Addr: 127.0.0.1:15157) (DC: dc1)
2016/03/28 05:55:52 [INFO] serf: EventMemberJoin: Node 15156.dc1 127.0.0.1
2016/03/28 05:55:52 [INFO] consul: adding WAN server Node 15156.dc1 (Addr: 127.0.0.1:15157) (DC: dc1)
2016/03/28 05:55:52 [INFO] raft: Node at 127.0.0.1:15157 [Follower] entering Follower state
2016/03/28 05:55:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:52 [INFO] raft: Node at 127.0.0.1:15157 [Candidate] entering Candidate state
2016/03/28 05:55:53 [INFO] raft: Node at 127.0.0.1:15161 [Follower] entering Follower state
2016/03/28 05:55:53 [INFO] serf: EventMemberJoin: Node 15160 127.0.0.1
2016/03/28 05:55:53 [INFO] consul: adding LAN server Node 15160 (Addr: 127.0.0.1:15161) (DC: dc2)
2016/03/28 05:55:53 [INFO] serf: EventMemberJoin: Node 15160.dc2 127.0.0.1
2016/03/28 05:55:53 [DEBUG] memberlist: TCP connection from=127.0.0.1:40069
2016/03/28 05:55:53 [INFO] consul: adding WAN server Node 15160.dc2 (Addr: 127.0.0.1:15161) (DC: dc2)
2016/03/28 05:55:53 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15159
2016/03/28 05:55:53 [INFO] serf: EventMemberJoin: Node 15160.dc2 127.0.0.1
2016/03/28 05:55:53 [INFO] consul: adding WAN server Node 15160.dc2 (Addr: 127.0.0.1:15161) (DC: dc2)
2016/03/28 05:55:53 [INFO] serf: EventMemberJoin: Node 15156.dc1 127.0.0.1
2016/03/28 05:55:53 [INFO] consul: adding WAN server Node 15156.dc1 (Addr: 127.0.0.1:15157) (DC: dc1)
2016/03/28 05:55:53 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:53 [DEBUG] raft: Vote granted from 127.0.0.1:15157. Tally: 1
2016/03/28 05:55:53 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:53 [INFO] raft: Node at 127.0.0.1:15157 [Leader] entering Leader state
2016/03/28 05:55:53 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:53 [INFO] consul: New leader elected: Node 15156
2016/03/28 05:55:53 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:53 [INFO] raft: Node at 127.0.0.1:15161 [Candidate] entering Candidate state
2016/03/28 05:55:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/28 05:55:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/28 05:55:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/28 05:55:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/28 05:55:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/28 05:55:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/28 05:55:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/28 05:55:53 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:53 [DEBUG] raft: Node 127.0.0.1:15157 updated peer set (2): [127.0.0.1:15157]
2016/03/28 05:55:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/28 05:55:53 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:53 [INFO] consul: member 'Node 15156' joined, marking health alive
2016/03/28 05:55:54 [INFO] consul: shutting down server
2016/03/28 05:55:54 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:54 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:54 [DEBUG] raft: Vote granted from 127.0.0.1:15161. Tally: 1
2016/03/28 05:55:54 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:54 [INFO] raft: Node at 127.0.0.1:15161 [Leader] entering Leader state
2016/03/28 05:55:54 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:54 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:54 [INFO] consul: shutting down server
2016/03/28 05:55:54 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:54 [DEBUG] memberlist: Failed UDP ping: Node 15160.dc2 (timeout reached)
2016/03/28 05:55:54 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:54 [INFO] memberlist: Suspect Node 15160.dc2 has failed, no acks received
--- PASS: TestCatalogListDatacenters (2.52s)
=== RUN   TestCatalogListDatacenters_DistanceSort
2016/03/28 05:55:54 [INFO] memberlist: Marking Node 15160.dc2 as failed, suspect timeout reached
2016/03/28 05:55:54 [INFO] serf: EventMemberFailed: Node 15160.dc2 127.0.0.1
2016/03/28 05:55:55 [INFO] raft: Node at 127.0.0.1:15165 [Follower] entering Follower state
2016/03/28 05:55:55 [INFO] serf: EventMemberJoin: Node 15164 127.0.0.1
2016/03/28 05:55:55 [INFO] consul: adding LAN server Node 15164 (Addr: 127.0.0.1:15165) (DC: dc1)
2016/03/28 05:55:55 [INFO] serf: EventMemberJoin: Node 15164.dc1 127.0.0.1
2016/03/28 05:55:55 [INFO] consul: adding WAN server Node 15164.dc1 (Addr: 127.0.0.1:15165) (DC: dc1)
2016/03/28 05:55:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:55 [INFO] raft: Node at 127.0.0.1:15165 [Candidate] entering Candidate state
2016/03/28 05:55:56 [INFO] raft: Node at 127.0.0.1:15169 [Follower] entering Follower state
2016/03/28 05:55:56 [INFO] serf: EventMemberJoin: Node 15168 127.0.0.1
2016/03/28 05:55:56 [INFO] consul: adding LAN server Node 15168 (Addr: 127.0.0.1:15169) (DC: dc2)
2016/03/28 05:55:56 [INFO] serf: EventMemberJoin: Node 15168.dc2 127.0.0.1
2016/03/28 05:55:56 [INFO] consul: adding WAN server Node 15168.dc2 (Addr: 127.0.0.1:15169) (DC: dc2)
2016/03/28 05:55:56 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:56 [INFO] raft: Node at 127.0.0.1:15169 [Candidate] entering Candidate state
2016/03/28 05:55:56 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:56 [DEBUG] raft: Vote granted from 127.0.0.1:15165. Tally: 1
2016/03/28 05:55:56 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:56 [INFO] raft: Node at 127.0.0.1:15165 [Leader] entering Leader state
2016/03/28 05:55:56 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:56 [INFO] consul: New leader elected: Node 15164
2016/03/28 05:55:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:57 [DEBUG] raft: Node 127.0.0.1:15165 updated peer set (2): [127.0.0.1:15165]
2016/03/28 05:55:57 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:57 [INFO] consul: member 'Node 15164' joined, marking health alive
2016/03/28 05:55:57 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:57 [DEBUG] raft: Vote granted from 127.0.0.1:15169. Tally: 1
2016/03/28 05:55:57 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:57 [INFO] raft: Node at 127.0.0.1:15169 [Leader] entering Leader state
2016/03/28 05:55:57 [INFO] raft: Node at 127.0.0.1:15173 [Follower] entering Follower state
2016/03/28 05:55:57 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:57 [INFO] consul: New leader elected: Node 15168
2016/03/28 05:55:57 [INFO] serf: EventMemberJoin: Node 15172 127.0.0.1
2016/03/28 05:55:57 [INFO] consul: adding LAN server Node 15172 (Addr: 127.0.0.1:15173) (DC: acdc)
2016/03/28 05:55:57 [INFO] serf: EventMemberJoin: Node 15172.acdc 127.0.0.1
2016/03/28 05:55:57 [INFO] consul: adding WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/03/28 05:55:57 [DEBUG] memberlist: TCP connection from=127.0.0.1:54082
2016/03/28 05:55:57 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15167
2016/03/28 05:55:57 [INFO] serf: EventMemberJoin: Node 15168.dc2 127.0.0.1
2016/03/28 05:55:57 [INFO] consul: adding WAN server Node 15168.dc2 (Addr: 127.0.0.1:15169) (DC: dc2)
2016/03/28 05:55:57 [INFO] serf: EventMemberJoin: Node 15164.dc1 127.0.0.1
2016/03/28 05:55:57 [INFO] consul: adding WAN server Node 15164.dc1 (Addr: 127.0.0.1:15165) (DC: dc1)
2016/03/28 05:55:57 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15167
2016/03/28 05:55:57 [DEBUG] memberlist: TCP connection from=127.0.0.1:54083
2016/03/28 05:55:57 [INFO] serf: EventMemberJoin: Node 15172.acdc 127.0.0.1
2016/03/28 05:55:57 [INFO] consul: adding WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/03/28 05:55:57 [INFO] serf: EventMemberJoin: Node 15168.dc2 127.0.0.1
2016/03/28 05:55:57 [INFO] serf: EventMemberJoin: Node 15164.dc1 127.0.0.1
2016/03/28 05:55:57 [INFO] consul: adding WAN server Node 15168.dc2 (Addr: 127.0.0.1:15169) (DC: dc2)
2016/03/28 05:55:57 [INFO] consul: adding WAN server Node 15164.dc1 (Addr: 127.0.0.1:15165) (DC: dc1)
2016/03/28 05:55:57 [INFO] serf: EventMemberJoin: Node 15172.acdc 127.0.0.1
2016/03/28 05:55:57 [INFO] consul: adding WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/28 05:55:57 [INFO] consul: shutting down server
2016/03/28 05:55:57 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:57 [INFO] raft: Node at 127.0.0.1:15173 [Candidate] entering Candidate state
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/28 05:55:57 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/28 05:55:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/28 05:55:57 [DEBUG] memberlist: Failed UDP ping: Node 15172.acdc (timeout reached)
2016/03/28 05:55:57 [INFO] memberlist: Suspect Node 15172.acdc has failed, no acks received
2016/03/28 05:55:57 [DEBUG] memberlist: Failed UDP ping: Node 15172.acdc (timeout reached)
2016/03/28 05:55:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:57 [DEBUG] raft: Node 127.0.0.1:15169 updated peer set (2): [127.0.0.1:15169]
2016/03/28 05:55:57 [INFO] memberlist: Suspect Node 15172.acdc has failed, no acks received
2016/03/28 05:55:57 [INFO] memberlist: Marking Node 15172.acdc as failed, suspect timeout reached
2016/03/28 05:55:57 [INFO] serf: EventMemberFailed: Node 15172.acdc 127.0.0.1
2016/03/28 05:55:57 [INFO] consul: removing WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/03/28 05:55:57 [INFO] memberlist: Marking Node 15172.acdc as failed, suspect timeout reached
2016/03/28 05:55:57 [INFO] serf: EventMemberFailed: Node 15172.acdc 127.0.0.1
2016/03/28 05:55:57 [INFO] consul: removing WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/03/28 05:55:57 [DEBUG] memberlist: Failed UDP ping: Node 15172.acdc (timeout reached)
2016/03/28 05:55:57 [DEBUG] memberlist: Failed UDP ping: Node 15172.acdc (timeout reached)
2016/03/28 05:55:57 [INFO] memberlist: Suspect Node 15172.acdc has failed, no acks received
2016/03/28 05:55:57 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:57 [INFO] consul: member 'Node 15168' joined, marking health alive
2016/03/28 05:55:58 [INFO] memberlist: Suspect Node 15172.acdc has failed, no acks received
2016/03/28 05:55:58 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:58 [INFO] consul: shutting down server
2016/03/28 05:55:58 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:58 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:58 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 05:55:58 [INFO] consul: shutting down server
2016/03/28 05:55:58 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:58 [DEBUG] memberlist: Failed UDP ping: Node 15168.dc2 (timeout reached)
2016/03/28 05:55:58 [WARN] serf: Shutdown without a Leave
2016/03/28 05:55:58 [INFO] memberlist: Suspect Node 15168.dc2 has failed, no acks received
2016/03/28 05:55:58 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- FAIL: TestCatalogListDatacenters_DistanceSort (4.06s)
	catalog_endpoint_test.go:292: bad: [dc1 dc2 acdc]
=== RUN   TestCatalogListNodes
2016/03/28 05:55:58 [INFO] memberlist: Marking Node 15168.dc2 as failed, suspect timeout reached
2016/03/28 05:55:58 [INFO] serf: EventMemberFailed: Node 15168.dc2 127.0.0.1
2016/03/28 05:55:59 [INFO] raft: Node at 127.0.0.1:15177 [Follower] entering Follower state
2016/03/28 05:55:59 [INFO] serf: EventMemberJoin: Node 15176 127.0.0.1
2016/03/28 05:55:59 [INFO] consul: adding LAN server Node 15176 (Addr: 127.0.0.1:15177) (DC: dc1)
2016/03/28 05:55:59 [INFO] serf: EventMemberJoin: Node 15176.dc1 127.0.0.1
2016/03/28 05:55:59 [INFO] consul: adding WAN server Node 15176.dc1 (Addr: 127.0.0.1:15177) (DC: dc1)
2016/03/28 05:55:59 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:55:59 [INFO] raft: Node at 127.0.0.1:15177 [Candidate] entering Candidate state
2016/03/28 05:55:59 [DEBUG] raft: Votes needed: 1
2016/03/28 05:55:59 [DEBUG] raft: Vote granted from 127.0.0.1:15177. Tally: 1
2016/03/28 05:55:59 [INFO] raft: Election won. Tally: 1
2016/03/28 05:55:59 [INFO] raft: Node at 127.0.0.1:15177 [Leader] entering Leader state
2016/03/28 05:55:59 [INFO] consul: cluster leadership acquired
2016/03/28 05:55:59 [INFO] consul: New leader elected: Node 15176
2016/03/28 05:55:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:55:59 [DEBUG] raft: Node 127.0.0.1:15177 updated peer set (2): [127.0.0.1:15177]
2016/03/28 05:55:59 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:55:59 [INFO] consul: member 'Node 15176' joined, marking health alive
2016/03/28 05:56:00 [INFO] consul: shutting down server
2016/03/28 05:56:00 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:00 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:00 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogListNodes (1.87s)
=== RUN   TestCatalogListNodes_StaleRead
2016/03/28 05:56:01 [INFO] raft: Node at 127.0.0.1:15181 [Follower] entering Follower state
2016/03/28 05:56:01 [INFO] serf: EventMemberJoin: Node 15180 127.0.0.1
2016/03/28 05:56:01 [INFO] consul: adding LAN server Node 15180 (Addr: 127.0.0.1:15181) (DC: dc1)
2016/03/28 05:56:01 [INFO] serf: EventMemberJoin: Node 15180.dc1 127.0.0.1
2016/03/28 05:56:01 [INFO] consul: adding WAN server Node 15180.dc1 (Addr: 127.0.0.1:15181) (DC: dc1)
2016/03/28 05:56:01 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:01 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:02 [INFO] raft: Node at 127.0.0.1:15185 [Follower] entering Follower state
2016/03/28 05:56:02 [INFO] serf: EventMemberJoin: Node 15184 127.0.0.1
2016/03/28 05:56:02 [INFO] consul: adding LAN server Node 15184 (Addr: 127.0.0.1:15185) (DC: dc1)
2016/03/28 05:56:02 [INFO] serf: EventMemberJoin: Node 15184.dc1 127.0.0.1
2016/03/28 05:56:02 [INFO] consul: adding WAN server Node 15184.dc1 (Addr: 127.0.0.1:15185) (DC: dc1)
2016/03/28 05:56:02 [DEBUG] memberlist: TCP connection from=127.0.0.1:43997
2016/03/28 05:56:02 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15182
2016/03/28 05:56:02 [INFO] serf: EventMemberJoin: Node 15184 127.0.0.1
2016/03/28 05:56:02 [INFO] serf: EventMemberJoin: Node 15180 127.0.0.1
2016/03/28 05:56:02 [INFO] consul: adding LAN server Node 15184 (Addr: 127.0.0.1:15185) (DC: dc1)
2016/03/28 05:56:02 [INFO] consul: adding LAN server Node 15180 (Addr: 127.0.0.1:15181) (DC: dc1)
2016/03/28 05:56:02 [DEBUG] raft: Votes needed: 1
2016/03/28 05:56:02 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:02 [INFO] raft: Election won. Tally: 1
2016/03/28 05:56:02 [INFO] raft: Node at 127.0.0.1:15181 [Leader] entering Leader state
2016/03/28 05:56:02 [INFO] consul: cluster leadership acquired
2016/03/28 05:56:02 [INFO] consul: New leader elected: Node 15180
2016/03/28 05:56:02 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:56:02 [DEBUG] serf: messageJoinType: Node 15184
2016/03/28 05:56:02 [DEBUG] serf: messageJoinType: Node 15184
2016/03/28 05:56:02 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:02 [INFO] consul: New leader elected: Node 15180
2016/03/28 05:56:02 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:56:02 [DEBUG] serf: messageJoinType: Node 15184
2016/03/28 05:56:02 [DEBUG] serf: messageJoinType: Node 15184
2016/03/28 05:56:02 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:02 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:02 [DEBUG] serf: messageJoinType: Node 15184
2016/03/28 05:56:02 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:02 [DEBUG] serf: messageJoinType: Node 15184
2016/03/28 05:56:02 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:02 [DEBUG] serf: messageJoinType: Node 15184
2016/03/28 05:56:02 [DEBUG] serf: messageJoinType: Node 15184
2016/03/28 05:56:02 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:02 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:02 [DEBUG] raft: Node 127.0.0.1:15181 updated peer set (2): [127.0.0.1:15181]
2016/03/28 05:56:02 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:56:02 [INFO] consul: member 'Node 15180' joined, marking health alive
2016/03/28 05:56:02 [DEBUG] raft: Node 127.0.0.1:15181 updated peer set (2): [127.0.0.1:15185 127.0.0.1:15181]
2016/03/28 05:56:02 [INFO] raft: Added peer 127.0.0.1:15185, starting replication
2016/03/28 05:56:02 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:55863
2016/03/28 05:56:02 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:55864
2016/03/28 05:56:03 [DEBUG] raft: Failed to contact 127.0.0.1:15185 in 174.635333ms
2016/03/28 05:56:03 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/28 05:56:03 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 05:56:03 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 05:56:03 [INFO] raft: Node at 127.0.0.1:15181 [Follower] entering Follower state
2016/03/28 05:56:03 [INFO] consul: cluster leadership lost
2016/03/28 05:56:03 [ERR] consul: failed to reconcile member: {Node 15184 127.0.0.1 15186 map[role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15185] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:56:03 [WARN] raft: AppendEntries to 127.0.0.1:15185 rejected, sending older logs (next: 1)
2016/03/28 05:56:03 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 05:56:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:03 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:03 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:55866
2016/03/28 05:56:03 [DEBUG] raft: Node 127.0.0.1:15185 updated peer set (2): [127.0.0.1:15181]
2016/03/28 05:56:03 [INFO] raft: pipelining replication to peer 127.0.0.1:15185
2016/03/28 05:56:03 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15185
2016/03/28 05:56:03 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:55867
2016/03/28 05:56:04 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:04 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:04 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:04 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:04 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:55868
2016/03/28 05:56:04 [ERR] raft: peer 127.0.0.1:15185 has newer term, stopping replication
2016/03/28 05:56:04 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:04 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:05 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:05 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:06 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:06 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:06 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:06 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:06 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:06 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/28 05:56:06 [DEBUG] raft-net: 127.0.0.1:15181 accepted connection from: 127.0.0.1:46927
2016/03/28 05:56:07 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:07 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:07 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/28 05:56:07 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:07 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:07 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:07 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/28 05:56:07 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/03/28 05:56:07 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:07 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/28 05:56:07 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:07 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/28 05:56:07 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:07 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:07 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:08 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:08 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/03/28 05:56:08 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/28 05:56:08 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:08 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:08 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:08 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:08 [INFO] raft: Node at 127.0.0.1:15185 [Follower] entering Follower state
2016/03/28 05:56:09 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:09 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:09 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:09 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:09 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:09 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/28 05:56:09 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:09 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/28 05:56:09 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:10 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:10 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:10 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:10 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/28 05:56:10 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/03/28 05:56:10 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:10 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:10 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:10 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:10 [INFO] raft: Node at 127.0.0.1:15185 [Follower] entering Follower state
2016/03/28 05:56:10 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:10 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/28 05:56:11 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:11 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:11 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 05:56:11 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:11 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:11 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:11 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/03/28 05:56:11 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 05:56:11 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:11 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/28 05:56:12 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:12 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:12 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 05:56:12 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:12 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:12 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:12 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/03/28 05:56:12 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 05:56:12 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:12 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/28 05:56:12 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:12 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:12 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/28 05:56:12 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:12 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:12 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:12 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/03/28 05:56:12 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/28 05:56:13 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:13 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/28 05:56:13 [INFO] consul: shutting down server
2016/03/28 05:56:13 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:13 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:13 [INFO] raft: Duplicate RequestVote for same term: 14
2016/03/28 05:56:13 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/03/28 05:56:13 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:13 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/28 05:56:13 [DEBUG] memberlist: Failed UDP ping: Node 15184 (timeout reached)
2016/03/28 05:56:13 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:13 [INFO] memberlist: Suspect Node 15184 has failed, no acks received
2016/03/28 05:56:13 [DEBUG] memberlist: Failed UDP ping: Node 15184 (timeout reached)
2016/03/28 05:56:13 [INFO] memberlist: Suspect Node 15184 has failed, no acks received
2016/03/28 05:56:13 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:56:13 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:13 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15185: EOF
2016/03/28 05:56:13 [INFO] consul: shutting down server
2016/03/28 05:56:13 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:14 [INFO] memberlist: Marking Node 15184 as failed, suspect timeout reached
2016/03/28 05:56:14 [INFO] serf: EventMemberFailed: Node 15184 127.0.0.1
2016/03/28 05:56:14 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:14 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:56:14 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15185: EOF
2016/03/28 05:56:14 [DEBUG] raft: Votes needed: 2
--- FAIL: TestCatalogListNodes_StaleRead (14.18s)
	wait.go:41: failed to find leader: No cluster leader
=== RUN   TestCatalogListNodes_ConsistentRead_Fail
2016/03/28 05:56:15 [INFO] raft: Node at 127.0.0.1:15189 [Follower] entering Follower state
2016/03/28 05:56:15 [INFO] serf: EventMemberJoin: Node 15188 127.0.0.1
2016/03/28 05:56:15 [INFO] consul: adding LAN server Node 15188 (Addr: 127.0.0.1:15189) (DC: dc1)
2016/03/28 05:56:15 [INFO] serf: EventMemberJoin: Node 15188.dc1 127.0.0.1
2016/03/28 05:56:15 [INFO] consul: adding WAN server Node 15188.dc1 (Addr: 127.0.0.1:15189) (DC: dc1)
2016/03/28 05:56:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:15 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:15 [INFO] raft: Node at 127.0.0.1:15193 [Follower] entering Follower state
2016/03/28 05:56:15 [INFO] serf: EventMemberJoin: Node 15192 127.0.0.1
2016/03/28 05:56:15 [INFO] consul: adding LAN server Node 15192 (Addr: 127.0.0.1:15193) (DC: dc1)
2016/03/28 05:56:15 [INFO] serf: EventMemberJoin: Node 15192.dc1 127.0.0.1
2016/03/28 05:56:15 [DEBUG] memberlist: TCP connection from=127.0.0.1:40484
2016/03/28 05:56:15 [INFO] consul: adding WAN server Node 15192.dc1 (Addr: 127.0.0.1:15193) (DC: dc1)
2016/03/28 05:56:15 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15190
2016/03/28 05:56:15 [INFO] serf: EventMemberJoin: Node 15192 127.0.0.1
2016/03/28 05:56:15 [INFO] consul: adding LAN server Node 15192 (Addr: 127.0.0.1:15193) (DC: dc1)
2016/03/28 05:56:15 [INFO] serf: EventMemberJoin: Node 15188 127.0.0.1
2016/03/28 05:56:15 [INFO] consul: adding LAN server Node 15188 (Addr: 127.0.0.1:15189) (DC: dc1)
2016/03/28 05:56:15 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:56:15 [DEBUG] serf: messageJoinType: Node 15192
2016/03/28 05:56:15 [DEBUG] raft: Votes needed: 1
2016/03/28 05:56:15 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:15 [INFO] raft: Election won. Tally: 1
2016/03/28 05:56:15 [INFO] raft: Node at 127.0.0.1:15189 [Leader] entering Leader state
2016/03/28 05:56:15 [INFO] consul: cluster leadership acquired
2016/03/28 05:56:15 [INFO] consul: New leader elected: Node 15188
2016/03/28 05:56:15 [DEBUG] serf: messageJoinType: Node 15192
2016/03/28 05:56:15 [DEBUG] serf: messageJoinType: Node 15192
2016/03/28 05:56:15 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:15 [INFO] consul: New leader elected: Node 15188
2016/03/28 05:56:15 [DEBUG] serf: messageJoinType: Node 15192
2016/03/28 05:56:16 [DEBUG] serf: messageJoinType: Node 15192
2016/03/28 05:56:16 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:16 [DEBUG] serf: messageJoinType: Node 15192
2016/03/28 05:56:16 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:16 [DEBUG] serf: messageJoinType: Node 15192
2016/03/28 05:56:16 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:16 [DEBUG] serf: messageJoinType: Node 15192
2016/03/28 05:56:16 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:16 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:16 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:16 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:56:16 [DEBUG] raft: Node 127.0.0.1:15189 updated peer set (2): [127.0.0.1:15189]
2016/03/28 05:56:16 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:56:16 [INFO] consul: member 'Node 15188' joined, marking health alive
2016/03/28 05:56:16 [DEBUG] raft: Node 127.0.0.1:15189 updated peer set (2): [127.0.0.1:15193 127.0.0.1:15189]
2016/03/28 05:56:16 [INFO] raft: Added peer 127.0.0.1:15193, starting replication
2016/03/28 05:56:16 [DEBUG] raft-net: 127.0.0.1:15193 accepted connection from: 127.0.0.1:36712
2016/03/28 05:56:16 [DEBUG] raft-net: 127.0.0.1:15193 accepted connection from: 127.0.0.1:36713
2016/03/28 05:56:16 [DEBUG] raft: Failed to contact 127.0.0.1:15193 in 214.592667ms
2016/03/28 05:56:16 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 05:56:16 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/28 05:56:16 [INFO] raft: Node at 127.0.0.1:15189 [Follower] entering Follower state
2016/03/28 05:56:16 [INFO] consul: cluster leadership lost
2016/03/28 05:56:16 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 05:56:16 [WARN] raft: AppendEntries to 127.0.0.1:15193 rejected, sending older logs (next: 1)
2016/03/28 05:56:16 [ERR] consul: failed to reconcile member: {Node 15192 127.0.0.1 15194 map[build: port:15193 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:56:16 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 05:56:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:16 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:17 [DEBUG] raft-net: 127.0.0.1:15193 accepted connection from: 127.0.0.1:36715
2016/03/28 05:56:17 [DEBUG] raft: Node 127.0.0.1:15193 updated peer set (2): [127.0.0.1:15189]
2016/03/28 05:56:17 [INFO] raft: pipelining replication to peer 127.0.0.1:15193
2016/03/28 05:56:17 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15193
2016/03/28 05:56:17 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:17 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:18 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:18 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:18 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:18 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:18 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:18 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:19 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:19 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:19 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:19 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:20 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:20 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:20 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:20 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:21 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:21 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:21 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:21 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:22 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:22 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:22 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:22 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:22 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:22 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:22 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:22 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:23 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:23 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/28 05:56:23 [DEBUG] raft-net: 127.0.0.1:15189 accepted connection from: 127.0.0.1:43993
2016/03/28 05:56:24 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:24 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:24 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/28 05:56:24 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:24 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:24 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:24 [DEBUG] raft: Vote granted from 127.0.0.1:15193. Tally: 1
2016/03/28 05:56:24 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/28 05:56:24 [DEBUG] memberlist: Potential blocking operation. Last command took 10.061ms
2016/03/28 05:56:24 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:24 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:24 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:24 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:25 [INFO] raft: Node at 127.0.0.1:15193 [Follower] entering Follower state
2016/03/28 05:56:25 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:25 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/28 05:56:25 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:25 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 05:56:25 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:25 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:25 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:26 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:26 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 05:56:26 [DEBUG] raft: Vote granted from 127.0.0.1:15193. Tally: 1
2016/03/28 05:56:26 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:26 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/28 05:56:26 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:26 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 05:56:26 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:26 [DEBUG] memberlist: Potential blocking operation. Last command took 10.579ms
2016/03/28 05:56:26 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:26 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:27 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:27 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 05:56:27 [DEBUG] raft: Vote granted from 127.0.0.1:15193. Tally: 1
2016/03/28 05:56:27 [INFO] consul: shutting down server
2016/03/28 05:56:27 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:27 [DEBUG] memberlist: Failed UDP ping: Node 15192 (timeout reached)
2016/03/28 05:56:27 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:27 [INFO] memberlist: Suspect Node 15192 has failed, no acks received
2016/03/28 05:56:27 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:56:27 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15193: EOF
2016/03/28 05:56:27 [DEBUG] memberlist: Failed UDP ping: Node 15192 (timeout reached)
2016/03/28 05:56:27 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:27 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/03/28 05:56:27 [INFO] memberlist: Suspect Node 15192 has failed, no acks received
2016/03/28 05:56:27 [INFO] memberlist: Marking Node 15192 as failed, suspect timeout reached
2016/03/28 05:56:27 [INFO] serf: EventMemberFailed: Node 15192 127.0.0.1
2016/03/28 05:56:27 [INFO] consul: removing LAN server Node 15192 (Addr: 127.0.0.1:15193) (DC: dc1)
2016/03/28 05:56:27 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:27 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/28 05:56:27 [INFO] consul: shutting down server
2016/03/28 05:56:27 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:27 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:56:27 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15193: EOF
2016/03/28 05:56:28 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:28 [DEBUG] raft: Votes needed: 2
--- FAIL: TestCatalogListNodes_ConsistentRead_Fail (14.13s)
	wait.go:41: failed to find leader: No cluster leader
=== RUN   TestCatalogListNodes_ConsistentRead
2016/03/28 05:56:29 [INFO] raft: Node at 127.0.0.1:15197 [Follower] entering Follower state
2016/03/28 05:56:29 [INFO] serf: EventMemberJoin: Node 15196 127.0.0.1
2016/03/28 05:56:29 [INFO] consul: adding LAN server Node 15196 (Addr: 127.0.0.1:15197) (DC: dc1)
2016/03/28 05:56:29 [INFO] serf: EventMemberJoin: Node 15196.dc1 127.0.0.1
2016/03/28 05:56:29 [INFO] consul: adding WAN server Node 15196.dc1 (Addr: 127.0.0.1:15197) (DC: dc1)
2016/03/28 05:56:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:29 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:30 [INFO] raft: Node at 127.0.0.1:15201 [Follower] entering Follower state
2016/03/28 05:56:30 [INFO] serf: EventMemberJoin: Node 15200 127.0.0.1
2016/03/28 05:56:30 [INFO] consul: adding LAN server Node 15200 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/03/28 05:56:30 [INFO] serf: EventMemberJoin: Node 15200.dc1 127.0.0.1
2016/03/28 05:56:30 [INFO] consul: adding WAN server Node 15200.dc1 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/03/28 05:56:30 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15198
2016/03/28 05:56:30 [DEBUG] memberlist: TCP connection from=127.0.0.1:45716
2016/03/28 05:56:30 [INFO] serf: EventMemberJoin: Node 15200 127.0.0.1
2016/03/28 05:56:30 [INFO] serf: EventMemberJoin: Node 15196 127.0.0.1
2016/03/28 05:56:30 [INFO] consul: adding LAN server Node 15200 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/03/28 05:56:30 [INFO] consul: adding LAN server Node 15196 (Addr: 127.0.0.1:15197) (DC: dc1)
2016/03/28 05:56:30 [DEBUG] serf: messageJoinType: Node 15200
2016/03/28 05:56:30 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:56:30 [DEBUG] raft: Votes needed: 1
2016/03/28 05:56:30 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/03/28 05:56:30 [INFO] raft: Election won. Tally: 1
2016/03/28 05:56:30 [INFO] raft: Node at 127.0.0.1:15197 [Leader] entering Leader state
2016/03/28 05:56:30 [INFO] consul: cluster leadership acquired
2016/03/28 05:56:30 [INFO] consul: New leader elected: Node 15196
2016/03/28 05:56:30 [DEBUG] serf: messageJoinType: Node 15200
2016/03/28 05:56:30 [DEBUG] serf: messageJoinType: Node 15200
2016/03/28 05:56:30 [DEBUG] serf: messageJoinType: Node 15200
2016/03/28 05:56:30 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:30 [INFO] consul: New leader elected: Node 15196
2016/03/28 05:56:30 [DEBUG] serf: messageJoinType: Node 15200
2016/03/28 05:56:30 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:30 [DEBUG] serf: messageJoinType: Node 15200
2016/03/28 05:56:30 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:56:30 [DEBUG] raft: Node 127.0.0.1:15197 updated peer set (2): [127.0.0.1:15197]
2016/03/28 05:56:30 [DEBUG] serf: messageJoinType: Node 15200
2016/03/28 05:56:30 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:30 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:30 [DEBUG] serf: messageJoinType: Node 15200
2016/03/28 05:56:30 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:30 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:30 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:56:30 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:56:30 [INFO] consul: member 'Node 15196' joined, marking health alive
2016/03/28 05:56:30 [DEBUG] raft: Node 127.0.0.1:15197 updated peer set (2): [127.0.0.1:15201 127.0.0.1:15197]
2016/03/28 05:56:30 [INFO] raft: Added peer 127.0.0.1:15201, starting replication
2016/03/28 05:56:30 [DEBUG] raft-net: 127.0.0.1:15201 accepted connection from: 127.0.0.1:42899
2016/03/28 05:56:30 [DEBUG] raft-net: 127.0.0.1:15201 accepted connection from: 127.0.0.1:42900
2016/03/28 05:56:30 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/28 05:56:30 [DEBUG] raft: Failed to contact 127.0.0.1:15201 in 208.579333ms
2016/03/28 05:56:30 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 05:56:30 [INFO] raft: Node at 127.0.0.1:15197 [Follower] entering Follower state
2016/03/28 05:56:30 [INFO] consul: cluster leadership lost
2016/03/28 05:56:30 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 05:56:30 [WARN] raft: AppendEntries to 127.0.0.1:15201 rejected, sending older logs (next: 1)
2016/03/28 05:56:30 [ERR] consul: failed to reconcile member: {Node 15200 127.0.0.1 15202 map[build: port:15201 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:56:30 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 05:56:30 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/28 05:56:30 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:30 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:31 [DEBUG] raft-net: 127.0.0.1:15201 accepted connection from: 127.0.0.1:42902
2016/03/28 05:56:31 [DEBUG] raft: Node 127.0.0.1:15201 updated peer set (2): [127.0.0.1:15197]
2016/03/28 05:56:31 [INFO] raft: pipelining replication to peer 127.0.0.1:15201
2016/03/28 05:56:31 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15201
2016/03/28 05:56:31 [DEBUG] memberlist: Potential blocking operation. Last command took 17.346ms
2016/03/28 05:56:31 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:31 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/03/28 05:56:31 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:31 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:32 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:32 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/03/28 05:56:32 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:32 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:33 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:33 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/03/28 05:56:33 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:33 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:33 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:33 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/03/28 05:56:33 [DEBUG] raft-net: 127.0.0.1:15197 accepted connection from: 127.0.0.1:48731
2016/03/28 05:56:34 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:34 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/28 05:56:34 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/03/28 05:56:34 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:34 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:34 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:34 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/28 05:56:34 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/03/28 05:56:34 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:34 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/03/28 05:56:35 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:35 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/03/28 05:56:35 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/28 05:56:35 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:35 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:35 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:35 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/28 05:56:35 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/03/28 05:56:35 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:35 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/03/28 05:56:36 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:36 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/03/28 05:56:36 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/28 05:56:36 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:36 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:36 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:36 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/03/28 05:56:36 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/28 05:56:36 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:36 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/03/28 05:56:37 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:37 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/03/28 05:56:37 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/28 05:56:37 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:37 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:37 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:37 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/28 05:56:37 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/03/28 05:56:38 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:38 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/03/28 05:56:38 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:38 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:38 [INFO] raft: Node at 127.0.0.1:15201 [Follower] entering Follower state
2016/03/28 05:56:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:38 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/03/28 05:56:39 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:39 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/28 05:56:39 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/03/28 05:56:39 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:39 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:39 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:39 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/28 05:56:39 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/03/28 05:56:39 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:39 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/03/28 05:56:40 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:40 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/03/28 05:56:40 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 05:56:40 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:40 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:40 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:40 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 05:56:40 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/03/28 05:56:40 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:40 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/03/28 05:56:41 [INFO] consul: shutting down server
2016/03/28 05:56:41 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:41 [DEBUG] memberlist: Failed UDP ping: Node 15200 (timeout reached)
2016/03/28 05:56:41 [INFO] memberlist: Suspect Node 15200 has failed, no acks received
2016/03/28 05:56:41 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:41 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/03/28 05:56:41 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 05:56:41 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:41 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:56:41 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/28 05:56:41 [DEBUG] memberlist: Failed UDP ping: Node 15200 (timeout reached)
2016/03/28 05:56:41 [INFO] memberlist: Suspect Node 15200 has failed, no acks received
2016/03/28 05:56:41 [INFO] memberlist: Marking Node 15200 as failed, suspect timeout reached
2016/03/28 05:56:41 [INFO] serf: EventMemberFailed: Node 15200 127.0.0.1
2016/03/28 05:56:41 [INFO] consul: removing LAN server Node 15200 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/03/28 05:56:41 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:56:41 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15201: EOF
2016/03/28 05:56:41 [DEBUG] raft: Votes needed: 2
2016/03/28 05:56:41 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:56:41 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15201: EOF
2016/03/28 05:56:41 [INFO] consul: shutting down server
2016/03/28 05:56:41 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:41 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:42 [DEBUG] raft: Votes needed: 2
--- FAIL: TestCatalogListNodes_ConsistentRead (13.45s)
	wait.go:41: failed to find leader: No cluster leader
=== RUN   TestCatalogListNodes_DistanceSort
2016/03/28 05:56:42 [INFO] raft: Node at 127.0.0.1:15205 [Follower] entering Follower state
2016/03/28 05:56:42 [INFO] serf: EventMemberJoin: Node 15204 127.0.0.1
2016/03/28 05:56:42 [INFO] consul: adding LAN server Node 15204 (Addr: 127.0.0.1:15205) (DC: dc1)
2016/03/28 05:56:42 [INFO] serf: EventMemberJoin: Node 15204.dc1 127.0.0.1
2016/03/28 05:56:42 [INFO] consul: adding WAN server Node 15204.dc1 (Addr: 127.0.0.1:15205) (DC: dc1)
2016/03/28 05:56:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:42 [INFO] raft: Node at 127.0.0.1:15205 [Candidate] entering Candidate state
2016/03/28 05:56:43 [DEBUG] raft: Votes needed: 1
2016/03/28 05:56:43 [DEBUG] raft: Vote granted from 127.0.0.1:15205. Tally: 1
2016/03/28 05:56:43 [INFO] raft: Election won. Tally: 1
2016/03/28 05:56:43 [INFO] raft: Node at 127.0.0.1:15205 [Leader] entering Leader state
2016/03/28 05:56:43 [INFO] consul: cluster leadership acquired
2016/03/28 05:56:43 [INFO] consul: New leader elected: Node 15204
2016/03/28 05:56:43 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:56:43 [DEBUG] raft: Node 127.0.0.1:15205 updated peer set (2): [127.0.0.1:15205]
2016/03/28 05:56:43 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:56:43 [INFO] consul: member 'Node 15204' joined, marking health alive
2016/03/28 05:56:43 [INFO] consul: shutting down server
2016/03/28 05:56:43 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:43 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:43 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalogListNodes_DistanceSort (1.78s)
=== RUN   TestCatalogListServices
2016/03/28 05:56:44 [INFO] raft: Node at 127.0.0.1:15209 [Follower] entering Follower state
2016/03/28 05:56:44 [INFO] serf: EventMemberJoin: Node 15208 127.0.0.1
2016/03/28 05:56:44 [INFO] consul: adding LAN server Node 15208 (Addr: 127.0.0.1:15209) (DC: dc1)
2016/03/28 05:56:44 [INFO] serf: EventMemberJoin: Node 15208.dc1 127.0.0.1
2016/03/28 05:56:44 [INFO] consul: adding WAN server Node 15208.dc1 (Addr: 127.0.0.1:15209) (DC: dc1)
2016/03/28 05:56:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:44 [INFO] raft: Node at 127.0.0.1:15209 [Candidate] entering Candidate state
2016/03/28 05:56:44 [DEBUG] raft: Votes needed: 1
2016/03/28 05:56:44 [DEBUG] raft: Vote granted from 127.0.0.1:15209. Tally: 1
2016/03/28 05:56:44 [INFO] raft: Election won. Tally: 1
2016/03/28 05:56:44 [INFO] raft: Node at 127.0.0.1:15209 [Leader] entering Leader state
2016/03/28 05:56:44 [INFO] consul: cluster leadership acquired
2016/03/28 05:56:44 [INFO] consul: New leader elected: Node 15208
2016/03/28 05:56:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:56:45 [DEBUG] raft: Node 127.0.0.1:15209 updated peer set (2): [127.0.0.1:15209]
2016/03/28 05:56:45 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:56:45 [INFO] consul: member 'Node 15208' joined, marking health alive
2016/03/28 05:56:45 [INFO] consul: shutting down server
2016/03/28 05:56:45 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:45 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:45 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:56:45 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogListServices (1.58s)
=== RUN   TestCatalogListServices_Blocking
2016/03/28 05:56:45 [INFO] raft: Node at 127.0.0.1:15213 [Follower] entering Follower state
2016/03/28 05:56:45 [INFO] serf: EventMemberJoin: Node 15212 127.0.0.1
2016/03/28 05:56:45 [INFO] consul: adding LAN server Node 15212 (Addr: 127.0.0.1:15213) (DC: dc1)
2016/03/28 05:56:45 [INFO] serf: EventMemberJoin: Node 15212.dc1 127.0.0.1
2016/03/28 05:56:45 [INFO] consul: adding WAN server Node 15212.dc1 (Addr: 127.0.0.1:15213) (DC: dc1)
2016/03/28 05:56:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:46 [INFO] raft: Node at 127.0.0.1:15213 [Candidate] entering Candidate state
2016/03/28 05:56:46 [DEBUG] raft: Votes needed: 1
2016/03/28 05:56:46 [DEBUG] raft: Vote granted from 127.0.0.1:15213. Tally: 1
2016/03/28 05:56:46 [INFO] raft: Election won. Tally: 1
2016/03/28 05:56:46 [INFO] raft: Node at 127.0.0.1:15213 [Leader] entering Leader state
2016/03/28 05:56:46 [INFO] consul: cluster leadership acquired
2016/03/28 05:56:46 [INFO] consul: New leader elected: Node 15212
2016/03/28 05:56:46 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:56:46 [DEBUG] raft: Node 127.0.0.1:15213 updated peer set (2): [127.0.0.1:15213]
2016/03/28 05:56:46 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:56:46 [INFO] consul: member 'Node 15212' joined, marking health alive
2016/03/28 05:56:47 [INFO] consul: shutting down server
2016/03/28 05:56:47 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:47 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogListServices_Blocking (1.72s)
=== RUN   TestCatalogListServices_Timeout
2016/03/28 05:56:47 [INFO] raft: Node at 127.0.0.1:15217 [Follower] entering Follower state
2016/03/28 05:56:47 [INFO] serf: EventMemberJoin: Node 15216 127.0.0.1
2016/03/28 05:56:47 [INFO] consul: adding LAN server Node 15216 (Addr: 127.0.0.1:15217) (DC: dc1)
2016/03/28 05:56:47 [INFO] serf: EventMemberJoin: Node 15216.dc1 127.0.0.1
2016/03/28 05:56:47 [INFO] consul: adding WAN server Node 15216.dc1 (Addr: 127.0.0.1:15217) (DC: dc1)
2016/03/28 05:56:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:47 [INFO] raft: Node at 127.0.0.1:15217 [Candidate] entering Candidate state
2016/03/28 05:56:48 [DEBUG] raft: Votes needed: 1
2016/03/28 05:56:48 [DEBUG] raft: Vote granted from 127.0.0.1:15217. Tally: 1
2016/03/28 05:56:48 [INFO] raft: Election won. Tally: 1
2016/03/28 05:56:48 [INFO] raft: Node at 127.0.0.1:15217 [Leader] entering Leader state
2016/03/28 05:56:48 [INFO] consul: cluster leadership acquired
2016/03/28 05:56:48 [INFO] consul: New leader elected: Node 15216
2016/03/28 05:56:48 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:56:48 [DEBUG] raft: Node 127.0.0.1:15217 updated peer set (2): [127.0.0.1:15217]
2016/03/28 05:56:48 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:56:48 [INFO] consul: member 'Node 15216' joined, marking health alive
2016/03/28 05:56:48 [INFO] consul: shutting down server
2016/03/28 05:56:48 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:48 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogListServices_Timeout (1.72s)
=== RUN   TestCatalogListServices_Stale
2016/03/28 05:56:49 [INFO] raft: Node at 127.0.0.1:15221 [Follower] entering Follower state
2016/03/28 05:56:49 [INFO] serf: EventMemberJoin: Node 15220 127.0.0.1
2016/03/28 05:56:49 [INFO] consul: adding LAN server Node 15220 (Addr: 127.0.0.1:15221) (DC: dc1)
2016/03/28 05:56:49 [INFO] serf: EventMemberJoin: Node 15220.dc1 127.0.0.1
2016/03/28 05:56:49 [INFO] consul: adding WAN server Node 15220.dc1 (Addr: 127.0.0.1:15221) (DC: dc1)
2016/03/28 05:56:49 [INFO] consul: shutting down server
2016/03/28 05:56:49 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:49 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:49 [INFO] raft: Node at 127.0.0.1:15221 [Candidate] entering Candidate state
2016/03/28 05:56:49 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:50 [DEBUG] raft: Votes needed: 1
--- PASS: TestCatalogListServices_Stale (1.46s)
=== RUN   TestCatalogListServiceNodes
2016/03/28 05:56:50 [INFO] raft: Node at 127.0.0.1:15225 [Follower] entering Follower state
2016/03/28 05:56:50 [INFO] serf: EventMemberJoin: Node 15224 127.0.0.1
2016/03/28 05:56:50 [INFO] consul: adding LAN server Node 15224 (Addr: 127.0.0.1:15225) (DC: dc1)
2016/03/28 05:56:50 [INFO] serf: EventMemberJoin: Node 15224.dc1 127.0.0.1
2016/03/28 05:56:50 [INFO] consul: adding WAN server Node 15224.dc1 (Addr: 127.0.0.1:15225) (DC: dc1)
2016/03/28 05:56:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:51 [INFO] raft: Node at 127.0.0.1:15225 [Candidate] entering Candidate state
2016/03/28 05:56:51 [DEBUG] raft: Votes needed: 1
2016/03/28 05:56:51 [DEBUG] raft: Vote granted from 127.0.0.1:15225. Tally: 1
2016/03/28 05:56:51 [INFO] raft: Election won. Tally: 1
2016/03/28 05:56:51 [INFO] raft: Node at 127.0.0.1:15225 [Leader] entering Leader state
2016/03/28 05:56:51 [INFO] consul: cluster leadership acquired
2016/03/28 05:56:51 [INFO] consul: New leader elected: Node 15224
2016/03/28 05:56:51 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:56:51 [DEBUG] raft: Node 127.0.0.1:15225 updated peer set (2): [127.0.0.1:15225]
2016/03/28 05:56:51 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:56:51 [INFO] consul: member 'Node 15224' joined, marking health alive
2016/03/28 05:56:52 [INFO] consul: shutting down server
2016/03/28 05:56:52 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:52 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:52 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalogListServiceNodes (2.52s)
=== RUN   TestCatalogListServiceNodes_DistanceSort
2016/03/28 05:56:53 [INFO] raft: Node at 127.0.0.1:15229 [Follower] entering Follower state
2016/03/28 05:56:53 [INFO] serf: EventMemberJoin: Node 15228 127.0.0.1
2016/03/28 05:56:53 [INFO] consul: adding LAN server Node 15228 (Addr: 127.0.0.1:15229) (DC: dc1)
2016/03/28 05:56:53 [INFO] serf: EventMemberJoin: Node 15228.dc1 127.0.0.1
2016/03/28 05:56:53 [INFO] consul: adding WAN server Node 15228.dc1 (Addr: 127.0.0.1:15229) (DC: dc1)
2016/03/28 05:56:53 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:53 [INFO] raft: Node at 127.0.0.1:15229 [Candidate] entering Candidate state
2016/03/28 05:56:54 [DEBUG] raft: Votes needed: 1
2016/03/28 05:56:54 [DEBUG] raft: Vote granted from 127.0.0.1:15229. Tally: 1
2016/03/28 05:56:54 [INFO] raft: Election won. Tally: 1
2016/03/28 05:56:54 [INFO] raft: Node at 127.0.0.1:15229 [Leader] entering Leader state
2016/03/28 05:56:54 [INFO] consul: cluster leadership acquired
2016/03/28 05:56:54 [INFO] consul: New leader elected: Node 15228
2016/03/28 05:56:54 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:56:54 [DEBUG] raft: Node 127.0.0.1:15229 updated peer set (2): [127.0.0.1:15229]
2016/03/28 05:56:54 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:56:54 [INFO] consul: member 'Node 15228' joined, marking health alive
2016/03/28 05:56:54 [INFO] consul: shutting down server
2016/03/28 05:56:54 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:54 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:54 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:56:54 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogListServiceNodes_DistanceSort (1.81s)
=== RUN   TestCatalogNodeServices
2016/03/28 05:56:55 [INFO] raft: Node at 127.0.0.1:15233 [Follower] entering Follower state
2016/03/28 05:56:55 [INFO] serf: EventMemberJoin: Node 15232 127.0.0.1
2016/03/28 05:56:55 [INFO] consul: adding LAN server Node 15232 (Addr: 127.0.0.1:15233) (DC: dc1)
2016/03/28 05:56:55 [INFO] serf: EventMemberJoin: Node 15232.dc1 127.0.0.1
2016/03/28 05:56:55 [INFO] consul: adding WAN server Node 15232.dc1 (Addr: 127.0.0.1:15233) (DC: dc1)
2016/03/28 05:56:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:55 [INFO] raft: Node at 127.0.0.1:15233 [Candidate] entering Candidate state
2016/03/28 05:56:55 [DEBUG] raft: Votes needed: 1
2016/03/28 05:56:55 [DEBUG] raft: Vote granted from 127.0.0.1:15233. Tally: 1
2016/03/28 05:56:55 [INFO] raft: Election won. Tally: 1
2016/03/28 05:56:55 [INFO] raft: Node at 127.0.0.1:15233 [Leader] entering Leader state
2016/03/28 05:56:55 [INFO] consul: cluster leadership acquired
2016/03/28 05:56:55 [INFO] consul: New leader elected: Node 15232
2016/03/28 05:56:55 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:56:56 [DEBUG] raft: Node 127.0.0.1:15233 updated peer set (2): [127.0.0.1:15233]
2016/03/28 05:56:56 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:56:56 [INFO] consul: member 'Node 15232' joined, marking health alive
2016/03/28 05:56:56 [INFO] consul: shutting down server
2016/03/28 05:56:56 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:56 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:56 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:56:56 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogNodeServices (1.69s)
=== RUN   TestCatalogRegister_FailedCase1
2016/03/28 05:56:57 [INFO] raft: Node at 127.0.0.1:15237 [Follower] entering Follower state
2016/03/28 05:56:57 [INFO] serf: EventMemberJoin: Node 15236 127.0.0.1
2016/03/28 05:56:57 [INFO] consul: adding LAN server Node 15236 (Addr: 127.0.0.1:15237) (DC: dc1)
2016/03/28 05:56:57 [INFO] serf: EventMemberJoin: Node 15236.dc1 127.0.0.1
2016/03/28 05:56:57 [INFO] consul: adding WAN server Node 15236.dc1 (Addr: 127.0.0.1:15237) (DC: dc1)
2016/03/28 05:56:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:57 [INFO] raft: Node at 127.0.0.1:15237 [Candidate] entering Candidate state
2016/03/28 05:56:57 [DEBUG] raft: Votes needed: 1
2016/03/28 05:56:57 [DEBUG] raft: Vote granted from 127.0.0.1:15237. Tally: 1
2016/03/28 05:56:57 [INFO] raft: Election won. Tally: 1
2016/03/28 05:56:57 [INFO] raft: Node at 127.0.0.1:15237 [Leader] entering Leader state
2016/03/28 05:56:57 [INFO] consul: cluster leadership acquired
2016/03/28 05:56:57 [INFO] consul: New leader elected: Node 15236
2016/03/28 05:56:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:56:58 [DEBUG] raft: Node 127.0.0.1:15237 updated peer set (2): [127.0.0.1:15237]
2016/03/28 05:56:58 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:56:58 [INFO] consul: member 'Node 15236' joined, marking health alive
2016/03/28 05:56:59 [INFO] consul: shutting down server
2016/03/28 05:56:59 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:59 [WARN] serf: Shutdown without a Leave
2016/03/28 05:56:59 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalogRegister_FailedCase1 (2.89s)
=== RUN   TestCatalog_ListServices_FilterACL
2016/03/28 05:56:59 [INFO] raft: Node at 127.0.0.1:15241 [Follower] entering Follower state
2016/03/28 05:56:59 [INFO] serf: EventMemberJoin: Node 15240 127.0.0.1
2016/03/28 05:56:59 [INFO] consul: adding LAN server Node 15240 (Addr: 127.0.0.1:15241) (DC: dc1)
2016/03/28 05:56:59 [INFO] serf: EventMemberJoin: Node 15240.dc1 127.0.0.1
2016/03/28 05:56:59 [INFO] consul: adding WAN server Node 15240.dc1 (Addr: 127.0.0.1:15241) (DC: dc1)
2016/03/28 05:56:59 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:56:59 [INFO] raft: Node at 127.0.0.1:15241 [Candidate] entering Candidate state
2016/03/28 05:57:00 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:00 [DEBUG] raft: Vote granted from 127.0.0.1:15241. Tally: 1
2016/03/28 05:57:00 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:00 [INFO] raft: Node at 127.0.0.1:15241 [Leader] entering Leader state
2016/03/28 05:57:00 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:00 [INFO] consul: New leader elected: Node 15240
2016/03/28 05:57:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:00 [DEBUG] raft: Node 127.0.0.1:15241 updated peer set (2): [127.0.0.1:15241]
2016/03/28 05:57:00 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:01 [INFO] consul: member 'Node 15240' joined, marking health alive
2016/03/28 05:57:02 [DEBUG] consul: dropping service "bar" from result due to ACLs
2016/03/28 05:57:02 [INFO] consul: shutting down server
2016/03/28 05:57:02 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:02 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:02 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalog_ListServices_FilterACL (3.31s)
=== RUN   TestCatalog_ServiceNodes_FilterACL
2016/03/28 05:57:03 [INFO] raft: Node at 127.0.0.1:15245 [Follower] entering Follower state
2016/03/28 05:57:03 [INFO] serf: EventMemberJoin: Node 15244 127.0.0.1
2016/03/28 05:57:03 [INFO] consul: adding LAN server Node 15244 (Addr: 127.0.0.1:15245) (DC: dc1)
2016/03/28 05:57:03 [INFO] serf: EventMemberJoin: Node 15244.dc1 127.0.0.1
2016/03/28 05:57:03 [INFO] consul: adding WAN server Node 15244.dc1 (Addr: 127.0.0.1:15245) (DC: dc1)
2016/03/28 05:57:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:03 [INFO] raft: Node at 127.0.0.1:15245 [Candidate] entering Candidate state
2016/03/28 05:57:03 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:03 [DEBUG] raft: Vote granted from 127.0.0.1:15245. Tally: 1
2016/03/28 05:57:03 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:03 [INFO] raft: Node at 127.0.0.1:15245 [Leader] entering Leader state
2016/03/28 05:57:03 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:03 [INFO] consul: New leader elected: Node 15244
2016/03/28 05:57:03 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:03 [DEBUG] raft: Node 127.0.0.1:15245 updated peer set (2): [127.0.0.1:15245]
2016/03/28 05:57:03 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:04 [INFO] consul: member 'Node 15244' joined, marking health alive
2016/03/28 05:57:05 [DEBUG] consul: dropping node "Node 15244" from result due to ACLs
2016/03/28 05:57:05 [INFO] consul: shutting down server
2016/03/28 05:57:05 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:05 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:06 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 05:57:06 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalog_ServiceNodes_FilterACL (3.39s)
=== RUN   TestCatalog_NodeServices_FilterACL
2016/03/28 05:57:07 [INFO] raft: Node at 127.0.0.1:15249 [Follower] entering Follower state
2016/03/28 05:57:07 [INFO] serf: EventMemberJoin: Node 15248 127.0.0.1
2016/03/28 05:57:07 [INFO] consul: adding LAN server Node 15248 (Addr: 127.0.0.1:15249) (DC: dc1)
2016/03/28 05:57:07 [INFO] serf: EventMemberJoin: Node 15248.dc1 127.0.0.1
2016/03/28 05:57:07 [INFO] consul: adding WAN server Node 15248.dc1 (Addr: 127.0.0.1:15249) (DC: dc1)
2016/03/28 05:57:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:07 [INFO] raft: Node at 127.0.0.1:15249 [Candidate] entering Candidate state
2016/03/28 05:57:07 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:07 [DEBUG] raft: Vote granted from 127.0.0.1:15249. Tally: 1
2016/03/28 05:57:07 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:07 [INFO] raft: Node at 127.0.0.1:15249 [Leader] entering Leader state
2016/03/28 05:57:07 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:07 [INFO] consul: New leader elected: Node 15248
2016/03/28 05:57:07 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:07 [DEBUG] raft: Node 127.0.0.1:15249 updated peer set (2): [127.0.0.1:15249]
2016/03/28 05:57:07 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:08 [INFO] consul: member 'Node 15248' joined, marking health alive
2016/03/28 05:57:09 [DEBUG] consul: dropping service "bar" from result due to ACLs
2016/03/28 05:57:09 [INFO] consul: shutting down server
2016/03/28 05:57:09 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:09 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:10 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:57:10 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalog_NodeServices_FilterACL (4.05s)
=== RUN   TestClient_StartStop
2016/03/28 05:57:10 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:57:10 [INFO] consul: shutting down client
2016/03/28 05:57:10 [WARN] serf: Shutdown without a Leave
--- PASS: TestClient_StartStop (0.07s)
=== RUN   TestClient_JoinLAN
2016/03/28 05:57:10 [INFO] raft: Node at 127.0.0.1:15255 [Follower] entering Follower state
2016/03/28 05:57:10 [INFO] serf: EventMemberJoin: Node 15254 127.0.0.1
2016/03/28 05:57:10 [INFO] consul: adding LAN server Node 15254 (Addr: 127.0.0.1:15255) (DC: dc1)
2016/03/28 05:57:10 [INFO] serf: EventMemberJoin: Node 15254.dc1 127.0.0.1
2016/03/28 05:57:10 [INFO] consul: adding WAN server Node 15254.dc1 (Addr: 127.0.0.1:15255) (DC: dc1)
2016/03/28 05:57:10 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:57:10 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15256
2016/03/28 05:57:10 [DEBUG] memberlist: TCP connection from=127.0.0.1:50138
2016/03/28 05:57:10 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:57:10 [INFO] serf: EventMemberJoin: Node 15254 127.0.0.1
2016/03/28 05:57:10 [INFO] consul: adding server Node 15254 (Addr: 127.0.0.1:15255) (DC: dc1)
2016/03/28 05:57:10 [INFO] consul: shutting down client
2016/03/28 05:57:10 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:10 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:10 [INFO] raft: Node at 127.0.0.1:15255 [Candidate] entering Candidate state
2016/03/28 05:57:11 [INFO] consul: shutting down server
2016/03/28 05:57:11 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:11 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:11 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_JoinLAN (1.40s)
=== RUN   TestClient_JoinLAN_Invalid
2016/03/28 05:57:11 [INFO] raft: Node at 127.0.0.1:15261 [Follower] entering Follower state
2016/03/28 05:57:11 [INFO] serf: EventMemberJoin: Node 15260 127.0.0.1
2016/03/28 05:57:11 [INFO] consul: adding LAN server Node 15260 (Addr: 127.0.0.1:15261) (DC: dc1)
2016/03/28 05:57:11 [INFO] serf: EventMemberJoin: Node 15260.dc1 127.0.0.1
2016/03/28 05:57:12 [INFO] consul: adding WAN server Node 15260.dc1 (Addr: 127.0.0.1:15261) (DC: dc1)
2016/03/28 05:57:12 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:57:12 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15262
2016/03/28 05:57:12 [DEBUG] memberlist: TCP connection from=127.0.0.1:56160
2016/03/28 05:57:12 [ERR] memberlist: Failed push/pull merge: Member 'testco.internal' part of wrong datacenter 'other' from=127.0.0.1:56160
2016/03/28 05:57:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:12 [INFO] raft: Node at 127.0.0.1:15261 [Candidate] entering Candidate state
2016/03/28 05:57:12 [INFO] consul: shutting down client
2016/03/28 05:57:12 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:12 [INFO] consul: shutting down server
2016/03/28 05:57:12 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:12 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:12 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_JoinLAN_Invalid (1.14s)
=== RUN   TestClient_JoinWAN_Invalid
2016/03/28 05:57:13 [INFO] raft: Node at 127.0.0.1:15267 [Follower] entering Follower state
2016/03/28 05:57:13 [INFO] serf: EventMemberJoin: Node 15266 127.0.0.1
2016/03/28 05:57:13 [INFO] consul: adding LAN server Node 15266 (Addr: 127.0.0.1:15267) (DC: dc1)
2016/03/28 05:57:13 [INFO] serf: EventMemberJoin: Node 15266.dc1 127.0.0.1
2016/03/28 05:57:13 [INFO] consul: adding WAN server Node 15266.dc1 (Addr: 127.0.0.1:15267) (DC: dc1)
2016/03/28 05:57:13 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:57:13 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15269
2016/03/28 05:57:13 [DEBUG] memberlist: TCP connection from=127.0.0.1:45620
2016/03/28 05:57:13 [ERR] memberlist: Failed push/pull merge: Member 'testco.internal' is not a server from=127.0.0.1:45620
2016/03/28 05:57:13 [INFO] consul: shutting down client
2016/03/28 05:57:13 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:13 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:13 [INFO] raft: Node at 127.0.0.1:15267 [Candidate] entering Candidate state
2016/03/28 05:57:13 [INFO] consul: shutting down server
2016/03/28 05:57:13 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:13 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:13 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_JoinWAN_Invalid (1.14s)
=== RUN   TestClient_RPC
2016/03/28 05:57:14 [INFO] raft: Node at 127.0.0.1:15273 [Follower] entering Follower state
2016/03/28 05:57:14 [INFO] serf: EventMemberJoin: Node 15272 127.0.0.1
2016/03/28 05:57:14 [INFO] consul: adding LAN server Node 15272 (Addr: 127.0.0.1:15273) (DC: dc1)
2016/03/28 05:57:14 [INFO] serf: EventMemberJoin: Node 15272.dc1 127.0.0.1
2016/03/28 05:57:14 [INFO] consul: adding WAN server Node 15272.dc1 (Addr: 127.0.0.1:15273) (DC: dc1)
2016/03/28 05:57:14 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:57:14 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15274
2016/03/28 05:57:14 [DEBUG] memberlist: TCP connection from=127.0.0.1:40160
2016/03/28 05:57:14 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:57:14 [INFO] serf: EventMemberJoin: Node 15272 127.0.0.1
2016/03/28 05:57:14 [INFO] consul: adding server Node 15272 (Addr: 127.0.0.1:15273) (DC: dc1)
2016/03/28 05:57:14 [INFO] consul: shutting down client
2016/03/28 05:57:14 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:14 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:14 [INFO] raft: Node at 127.0.0.1:15273 [Candidate] entering Candidate state
2016/03/28 05:57:14 [INFO] consul: shutting down server
2016/03/28 05:57:14 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:14 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/28 05:57:14 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/28 05:57:14 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:14 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/03/28 05:57:14 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/03/28 05:57:15 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_RPC (1.16s)
=== RUN   TestClient_RPC_Pool
2016/03/28 05:57:15 [INFO] raft: Node at 127.0.0.1:15279 [Follower] entering Follower state
2016/03/28 05:57:15 [INFO] serf: EventMemberJoin: Node 15278 127.0.0.1
2016/03/28 05:57:15 [INFO] consul: adding LAN server Node 15278 (Addr: 127.0.0.1:15279) (DC: dc1)
2016/03/28 05:57:15 [INFO] serf: EventMemberJoin: Node 15278.dc1 127.0.0.1
2016/03/28 05:57:15 [INFO] consul: adding WAN server Node 15278.dc1 (Addr: 127.0.0.1:15279) (DC: dc1)
2016/03/28 05:57:15 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:57:15 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15280
2016/03/28 05:57:15 [DEBUG] memberlist: TCP connection from=127.0.0.1:54856
2016/03/28 05:57:15 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:57:15 [INFO] serf: EventMemberJoin: Node 15278 127.0.0.1
2016/03/28 05:57:15 [INFO] consul: adding server Node 15278 (Addr: 127.0.0.1:15279) (DC: dc1)
2016/03/28 05:57:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:15 [INFO] raft: Node at 127.0.0.1:15279 [Candidate] entering Candidate state
2016/03/28 05:57:15 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:57:15 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:57:15 [INFO] consul: shutting down client
2016/03/28 05:57:15 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:16 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/28 05:57:16 [INFO] consul: shutting down server
2016/03/28 05:57:16 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:16 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/28 05:57:16 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:16 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/03/28 05:57:16 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/03/28 05:57:16 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_RPC_Pool (1.49s)
=== RUN   TestClient_RPC_TLS
2016/03/28 05:57:16 [INFO] raft: Node at 127.0.0.1:15284 [Follower] entering Follower state
2016/03/28 05:57:16 [INFO] serf: EventMemberJoin: a.testco.internal 127.0.0.1
2016/03/28 05:57:16 [INFO] consul: adding LAN server a.testco.internal (Addr: 127.0.0.1:15284) (DC: dc1)
2016/03/28 05:57:16 [INFO] serf: EventMemberJoin: a.testco.internal.dc1 127.0.0.1
2016/03/28 05:57:16 [INFO] consul: adding WAN server a.testco.internal.dc1 (Addr: 127.0.0.1:15284) (DC: dc1)
2016/03/28 05:57:16 [INFO] serf: EventMemberJoin: b.testco.internal 127.0.0.1
2016/03/28 05:57:16 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15285
2016/03/28 05:57:16 [DEBUG] memberlist: TCP connection from=127.0.0.1:54385
2016/03/28 05:57:16 [INFO] serf: EventMemberJoin: b.testco.internal 127.0.0.1
2016/03/28 05:57:16 [INFO] serf: EventMemberJoin: a.testco.internal 127.0.0.1
2016/03/28 05:57:16 [INFO] consul: adding server a.testco.internal (Addr: 127.0.0.1:15284) (DC: dc1)
2016/03/28 05:57:17 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:17 [INFO] raft: Node at 127.0.0.1:15284 [Candidate] entering Candidate state
2016/03/28 05:57:17 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/28 05:57:17 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/28 05:57:17 [INFO] consul: shutting down client
2016/03/28 05:57:17 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:17 [INFO] consul: shutting down server
2016/03/28 05:57:17 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:17 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:17 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_RPC_TLS (1.15s)
=== RUN   TestClientServer_UserEvent
2016/03/28 05:57:17 [INFO] serf: EventMemberJoin: Client 15289 127.0.0.1
2016/03/28 05:57:18 [INFO] raft: Node at 127.0.0.1:15293 [Follower] entering Follower state
2016/03/28 05:57:18 [INFO] serf: EventMemberJoin: Node 15292 127.0.0.1
2016/03/28 05:57:18 [INFO] consul: adding LAN server Node 15292 (Addr: 127.0.0.1:15293) (DC: dc1)
2016/03/28 05:57:18 [INFO] serf: EventMemberJoin: Node 15292.dc1 127.0.0.1
2016/03/28 05:57:18 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15294
2016/03/28 05:57:18 [INFO] consul: adding WAN server Node 15292.dc1 (Addr: 127.0.0.1:15293) (DC: dc1)
2016/03/28 05:57:18 [DEBUG] memberlist: TCP connection from=127.0.0.1:46130
2016/03/28 05:57:18 [INFO] serf: EventMemberJoin: Client 15289 127.0.0.1
2016/03/28 05:57:18 [INFO] serf: EventMemberJoin: Node 15292 127.0.0.1
2016/03/28 05:57:18 [INFO] consul: adding server Node 15292 (Addr: 127.0.0.1:15293) (DC: dc1)
2016/03/28 05:57:18 [DEBUG] serf: messageJoinType: Client 15289
2016/03/28 05:57:18 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:18 [INFO] raft: Node at 127.0.0.1:15293 [Candidate] entering Candidate state
2016/03/28 05:57:18 [DEBUG] serf: messageJoinType: Client 15289
2016/03/28 05:57:18 [DEBUG] serf: messageJoinType: Client 15289
2016/03/28 05:57:18 [DEBUG] serf: messageJoinType: Client 15289
2016/03/28 05:57:18 [DEBUG] serf: messageJoinType: Client 15289
2016/03/28 05:57:18 [DEBUG] serf: messageJoinType: Client 15289
2016/03/28 05:57:18 [DEBUG] serf: messageJoinType: Client 15289
2016/03/28 05:57:18 [DEBUG] serf: messageJoinType: Client 15289
2016/03/28 05:57:18 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:18 [DEBUG] raft: Vote granted from 127.0.0.1:15293. Tally: 1
2016/03/28 05:57:18 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:18 [INFO] raft: Node at 127.0.0.1:15293 [Leader] entering Leader state
2016/03/28 05:57:18 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:18 [INFO] consul: New leader elected: Node 15292
2016/03/28 05:57:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:57:18 [INFO] consul: New leader elected: Node 15292
2016/03/28 05:57:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:57:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:57:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:57:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:57:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:57:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:57:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:57:19 [DEBUG] raft: Node 127.0.0.1:15293 updated peer set (2): [127.0.0.1:15293]
2016/03/28 05:57:19 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:19 [INFO] consul: member 'Node 15292' joined, marking health alive
2016/03/28 05:57:19 [INFO] consul: member 'Client 15289' joined, marking health alive
2016/03/28 05:57:19 [DEBUG] consul: user event: foo
2016/03/28 05:57:19 [DEBUG] serf: messageUserEventType: consul:event:foo
2016/03/28 05:57:19 [DEBUG] consul: user event: foo
2016/03/28 05:57:19 [INFO] consul: shutting down server
2016/03/28 05:57:19 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:19 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:19 [INFO] consul: shutting down client
2016/03/28 05:57:19 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientServer_UserEvent (1.77s)
=== RUN   TestClient_Encrypted
2016/03/28 05:57:19 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:57:19 [INFO] serf: EventMemberJoin: Client 15298 127.0.0.1
2016/03/28 05:57:19 [INFO] consul: shutting down client
2016/03/28 05:57:19 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:19 [INFO] consul: shutting down client
2016/03/28 05:57:19 [WARN] serf: Shutdown without a Leave
--- PASS: TestClient_Encrypted (0.14s)
=== RUN   TestCoordinate_Update
2016/03/28 05:57:20 [INFO] raft: Node at 127.0.0.1:15302 [Follower] entering Follower state
2016/03/28 05:57:20 [INFO] serf: EventMemberJoin: Node 15301 127.0.0.1
2016/03/28 05:57:20 [INFO] consul: adding LAN server Node 15301 (Addr: 127.0.0.1:15302) (DC: dc1)
2016/03/28 05:57:20 [INFO] serf: EventMemberJoin: Node 15301.dc1 127.0.0.1
2016/03/28 05:57:20 [INFO] consul: adding WAN server Node 15301.dc1 (Addr: 127.0.0.1:15302) (DC: dc1)
2016/03/28 05:57:20 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:20 [INFO] raft: Node at 127.0.0.1:15302 [Candidate] entering Candidate state
2016/03/28 05:57:20 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:20 [DEBUG] raft: Vote granted from 127.0.0.1:15302. Tally: 1
2016/03/28 05:57:20 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:20 [INFO] raft: Node at 127.0.0.1:15302 [Leader] entering Leader state
2016/03/28 05:57:20 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:20 [INFO] consul: New leader elected: Node 15301
2016/03/28 05:57:20 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:20 [DEBUG] raft: Node 127.0.0.1:15302 updated peer set (2): [127.0.0.1:15302]
2016/03/28 05:57:20 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:20 [INFO] consul: member 'Node 15301' joined, marking health alive
2016/03/28 05:57:28 [WARN] consul.coordinate: Discarded 1 coordinate updates
2016/03/28 05:57:29 [INFO] consul: shutting down server
2016/03/28 05:57:29 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:29 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:29 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:57:29 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCoordinate_Update (9.80s)
=== RUN   TestCoordinate_ListDatacenters
2016/03/28 05:57:29 [INFO] raft: Node at 127.0.0.1:15306 [Follower] entering Follower state
2016/03/28 05:57:29 [INFO] serf: EventMemberJoin: Node 15305 127.0.0.1
2016/03/28 05:57:29 [INFO] consul: adding LAN server Node 15305 (Addr: 127.0.0.1:15306) (DC: dc1)
2016/03/28 05:57:29 [INFO] serf: EventMemberJoin: Node 15305.dc1 127.0.0.1
2016/03/28 05:57:29 [INFO] consul: adding WAN server Node 15305.dc1 (Addr: 127.0.0.1:15306) (DC: dc1)
2016/03/28 05:57:30 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:30 [INFO] raft: Node at 127.0.0.1:15306 [Candidate] entering Candidate state
2016/03/28 05:57:30 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:30 [DEBUG] raft: Vote granted from 127.0.0.1:15306. Tally: 1
2016/03/28 05:57:30 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:30 [INFO] raft: Node at 127.0.0.1:15306 [Leader] entering Leader state
2016/03/28 05:57:30 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:30 [INFO] consul: New leader elected: Node 15305
2016/03/28 05:57:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:30 [DEBUG] raft: Node 127.0.0.1:15306 updated peer set (2): [127.0.0.1:15306]
2016/03/28 05:57:30 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:30 [INFO] consul: member 'Node 15305' joined, marking health alive
2016/03/28 05:57:31 [INFO] consul: shutting down server
2016/03/28 05:57:31 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:31 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:31 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCoordinate_ListDatacenters (1.86s)
=== RUN   TestCoordinate_ListNodes
2016/03/28 05:57:31 [INFO] raft: Node at 127.0.0.1:15310 [Follower] entering Follower state
2016/03/28 05:57:31 [INFO] serf: EventMemberJoin: Node 15309 127.0.0.1
2016/03/28 05:57:31 [INFO] consul: adding LAN server Node 15309 (Addr: 127.0.0.1:15310) (DC: dc1)
2016/03/28 05:57:31 [INFO] serf: EventMemberJoin: Node 15309.dc1 127.0.0.1
2016/03/28 05:57:31 [INFO] consul: adding WAN server Node 15309.dc1 (Addr: 127.0.0.1:15310) (DC: dc1)
2016/03/28 05:57:31 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:31 [INFO] raft: Node at 127.0.0.1:15310 [Candidate] entering Candidate state
2016/03/28 05:57:32 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:32 [DEBUG] raft: Vote granted from 127.0.0.1:15310. Tally: 1
2016/03/28 05:57:32 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:32 [INFO] raft: Node at 127.0.0.1:15310 [Leader] entering Leader state
2016/03/28 05:57:32 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:32 [INFO] consul: New leader elected: Node 15309
2016/03/28 05:57:32 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:32 [DEBUG] raft: Node 127.0.0.1:15310 updated peer set (2): [127.0.0.1:15310]
2016/03/28 05:57:32 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:32 [INFO] consul: member 'Node 15309' joined, marking health alive
2016/03/28 05:57:34 [INFO] consul: shutting down server
2016/03/28 05:57:34 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:35 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:35 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 05:57:35 [WARN] consul.coordinate: Batch update failed: leadership lost while committing log
--- FAIL: TestCoordinate_ListNodes (4.21s)
	coordinate_endpoint_test.go:286: bad: []
=== RUN   TestFilterDirEnt
--- PASS: TestFilterDirEnt (0.00s)
=== RUN   TestKeys
--- PASS: TestKeys (0.00s)
=== RUN   TestFSM_RegisterNode
--- PASS: TestFSM_RegisterNode (0.01s)
=== RUN   TestFSM_RegisterNode_Service
--- PASS: TestFSM_RegisterNode_Service (0.00s)
=== RUN   TestFSM_DeregisterService
--- PASS: TestFSM_DeregisterService (0.00s)
=== RUN   TestFSM_DeregisterCheck
--- PASS: TestFSM_DeregisterCheck (0.00s)
=== RUN   TestFSM_DeregisterNode
--- PASS: TestFSM_DeregisterNode (0.00s)
=== RUN   TestFSM_SnapshotRestore
2016/03/28 05:57:35 [INFO] consul.fsm: snapshot created in 127.667µs
--- PASS: TestFSM_SnapshotRestore (0.01s)
=== RUN   TestFSM_KVSSet
--- PASS: TestFSM_KVSSet (0.00s)
=== RUN   TestFSM_KVSDelete
--- PASS: TestFSM_KVSDelete (0.00s)
=== RUN   TestFSM_KVSDeleteTree
--- PASS: TestFSM_KVSDeleteTree (0.00s)
=== RUN   TestFSM_KVSDeleteCheckAndSet
--- PASS: TestFSM_KVSDeleteCheckAndSet (0.00s)
=== RUN   TestFSM_KVSCheckAndSet
--- PASS: TestFSM_KVSCheckAndSet (0.00s)
=== RUN   TestFSM_CoordinateUpdate
--- PASS: TestFSM_CoordinateUpdate (0.00s)
=== RUN   TestFSM_SessionCreate_Destroy
--- PASS: TestFSM_SessionCreate_Destroy (0.00s)
=== RUN   TestFSM_KVSLock
--- PASS: TestFSM_KVSLock (0.00s)
=== RUN   TestFSM_KVSUnlock
--- PASS: TestFSM_KVSUnlock (0.00s)
=== RUN   TestFSM_ACL_Set_Delete
--- PASS: TestFSM_ACL_Set_Delete (0.00s)
=== RUN   TestFSM_PreparedQuery_CRUD
--- PASS: TestFSM_PreparedQuery_CRUD (0.00s)
=== RUN   TestFSM_TombstoneReap
--- PASS: TestFSM_TombstoneReap (0.00s)
=== RUN   TestFSM_IgnoreUnknown
2016/03/28 05:57:35 [WARN] consul.fsm: ignoring unknown message type (64), upgrade to newer version
--- PASS: TestFSM_IgnoreUnknown (0.00s)
=== RUN   TestHealth_ChecksInState
2016/03/28 05:57:36 [INFO] raft: Node at 127.0.0.1:15314 [Follower] entering Follower state
2016/03/28 05:57:36 [INFO] serf: EventMemberJoin: Node 15313 127.0.0.1
2016/03/28 05:57:36 [INFO] consul: adding LAN server Node 15313 (Addr: 127.0.0.1:15314) (DC: dc1)
2016/03/28 05:57:36 [INFO] serf: EventMemberJoin: Node 15313.dc1 127.0.0.1
2016/03/28 05:57:36 [INFO] consul: adding WAN server Node 15313.dc1 (Addr: 127.0.0.1:15314) (DC: dc1)
2016/03/28 05:57:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:36 [INFO] raft: Node at 127.0.0.1:15314 [Candidate] entering Candidate state
2016/03/28 05:57:36 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:36 [DEBUG] raft: Vote granted from 127.0.0.1:15314. Tally: 1
2016/03/28 05:57:36 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:36 [INFO] raft: Node at 127.0.0.1:15314 [Leader] entering Leader state
2016/03/28 05:57:36 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:36 [INFO] consul: New leader elected: Node 15313
2016/03/28 05:57:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:36 [DEBUG] raft: Node 127.0.0.1:15314 updated peer set (2): [127.0.0.1:15314]
2016/03/28 05:57:37 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:37 [INFO] consul: member 'Node 15313' joined, marking health alive
2016/03/28 05:57:37 [INFO] consul: shutting down server
2016/03/28 05:57:37 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:38 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:38 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:57:38 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestHealth_ChecksInState (2.61s)
=== RUN   TestHealth_ChecksInState_DistanceSort
2016/03/28 05:57:38 [INFO] raft: Node at 127.0.0.1:15318 [Follower] entering Follower state
2016/03/28 05:57:38 [INFO] serf: EventMemberJoin: Node 15317 127.0.0.1
2016/03/28 05:57:38 [INFO] consul: adding LAN server Node 15317 (Addr: 127.0.0.1:15318) (DC: dc1)
2016/03/28 05:57:38 [INFO] serf: EventMemberJoin: Node 15317.dc1 127.0.0.1
2016/03/28 05:57:38 [INFO] consul: adding WAN server Node 15317.dc1 (Addr: 127.0.0.1:15318) (DC: dc1)
2016/03/28 05:57:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:38 [INFO] raft: Node at 127.0.0.1:15318 [Candidate] entering Candidate state
2016/03/28 05:57:39 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:39 [DEBUG] raft: Vote granted from 127.0.0.1:15318. Tally: 1
2016/03/28 05:57:39 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:39 [INFO] raft: Node at 127.0.0.1:15318 [Leader] entering Leader state
2016/03/28 05:57:39 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:39 [INFO] consul: New leader elected: Node 15317
2016/03/28 05:57:39 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:39 [DEBUG] raft: Node 127.0.0.1:15318 updated peer set (2): [127.0.0.1:15318]
2016/03/28 05:57:39 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:39 [INFO] consul: member 'Node 15317' joined, marking health alive
2016/03/28 05:57:40 [INFO] consul: shutting down server
2016/03/28 05:57:40 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:40 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:40 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:57:40 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestHealth_ChecksInState_DistanceSort (2.62s)
=== RUN   TestHealth_NodeChecks
2016/03/28 05:57:41 [INFO] raft: Node at 127.0.0.1:15322 [Follower] entering Follower state
2016/03/28 05:57:41 [INFO] serf: EventMemberJoin: Node 15321 127.0.0.1
2016/03/28 05:57:41 [INFO] consul: adding LAN server Node 15321 (Addr: 127.0.0.1:15322) (DC: dc1)
2016/03/28 05:57:41 [INFO] serf: EventMemberJoin: Node 15321.dc1 127.0.0.1
2016/03/28 05:57:41 [INFO] consul: adding WAN server Node 15321.dc1 (Addr: 127.0.0.1:15322) (DC: dc1)
2016/03/28 05:57:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:41 [INFO] raft: Node at 127.0.0.1:15322 [Candidate] entering Candidate state
2016/03/28 05:57:41 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:41 [DEBUG] raft: Vote granted from 127.0.0.1:15322. Tally: 1
2016/03/28 05:57:41 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:41 [INFO] raft: Node at 127.0.0.1:15322 [Leader] entering Leader state
2016/03/28 05:57:41 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:41 [INFO] consul: New leader elected: Node 15321
2016/03/28 05:57:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:41 [DEBUG] raft: Node 127.0.0.1:15322 updated peer set (2): [127.0.0.1:15322]
2016/03/28 05:57:42 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:42 [INFO] consul: member 'Node 15321' joined, marking health alive
2016/03/28 05:57:42 [INFO] consul: shutting down server
2016/03/28 05:57:42 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:42 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:42 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_NodeChecks (2.03s)
=== RUN   TestHealth_ServiceChecks
2016/03/28 05:57:43 [INFO] raft: Node at 127.0.0.1:15326 [Follower] entering Follower state
2016/03/28 05:57:43 [INFO] serf: EventMemberJoin: Node 15325 127.0.0.1
2016/03/28 05:57:43 [INFO] consul: adding LAN server Node 15325 (Addr: 127.0.0.1:15326) (DC: dc1)
2016/03/28 05:57:43 [INFO] serf: EventMemberJoin: Node 15325.dc1 127.0.0.1
2016/03/28 05:57:43 [INFO] consul: adding WAN server Node 15325.dc1 (Addr: 127.0.0.1:15326) (DC: dc1)
2016/03/28 05:57:43 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:43 [INFO] raft: Node at 127.0.0.1:15326 [Candidate] entering Candidate state
2016/03/28 05:57:44 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:44 [DEBUG] raft: Vote granted from 127.0.0.1:15326. Tally: 1
2016/03/28 05:57:44 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:44 [INFO] raft: Node at 127.0.0.1:15326 [Leader] entering Leader state
2016/03/28 05:57:44 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:44 [INFO] consul: New leader elected: Node 15325
2016/03/28 05:57:44 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:44 [DEBUG] raft: Node 127.0.0.1:15326 updated peer set (2): [127.0.0.1:15326]
2016/03/28 05:57:44 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:44 [INFO] consul: member 'Node 15325' joined, marking health alive
2016/03/28 05:57:45 [INFO] consul: shutting down server
2016/03/28 05:57:45 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:45 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:45 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:57:45 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestHealth_ServiceChecks (2.53s)
=== RUN   TestHealth_ServiceChecks_DistanceSort
2016/03/28 05:57:45 [INFO] raft: Node at 127.0.0.1:15330 [Follower] entering Follower state
2016/03/28 05:57:45 [INFO] serf: EventMemberJoin: Node 15329 127.0.0.1
2016/03/28 05:57:45 [INFO] consul: adding LAN server Node 15329 (Addr: 127.0.0.1:15330) (DC: dc1)
2016/03/28 05:57:45 [INFO] serf: EventMemberJoin: Node 15329.dc1 127.0.0.1
2016/03/28 05:57:45 [INFO] consul: adding WAN server Node 15329.dc1 (Addr: 127.0.0.1:15330) (DC: dc1)
2016/03/28 05:57:45 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:45 [INFO] raft: Node at 127.0.0.1:15330 [Candidate] entering Candidate state
2016/03/28 05:57:46 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:46 [DEBUG] raft: Vote granted from 127.0.0.1:15330. Tally: 1
2016/03/28 05:57:46 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:46 [INFO] raft: Node at 127.0.0.1:15330 [Leader] entering Leader state
2016/03/28 05:57:46 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:46 [INFO] consul: New leader elected: Node 15329
2016/03/28 05:57:46 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:46 [DEBUG] raft: Node 127.0.0.1:15330 updated peer set (2): [127.0.0.1:15330]
2016/03/28 05:57:46 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:46 [INFO] consul: member 'Node 15329' joined, marking health alive
2016/03/28 05:57:47 [INFO] consul: shutting down server
2016/03/28 05:57:47 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:47 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:47 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:57:47 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestHealth_ServiceChecks_DistanceSort (2.52s)
=== RUN   TestHealth_ServiceNodes
2016/03/28 05:57:48 [INFO] raft: Node at 127.0.0.1:15334 [Follower] entering Follower state
2016/03/28 05:57:48 [INFO] serf: EventMemberJoin: Node 15333 127.0.0.1
2016/03/28 05:57:48 [INFO] consul: adding LAN server Node 15333 (Addr: 127.0.0.1:15334) (DC: dc1)
2016/03/28 05:57:48 [INFO] serf: EventMemberJoin: Node 15333.dc1 127.0.0.1
2016/03/28 05:57:48 [INFO] consul: adding WAN server Node 15333.dc1 (Addr: 127.0.0.1:15334) (DC: dc1)
2016/03/28 05:57:48 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:48 [INFO] raft: Node at 127.0.0.1:15334 [Candidate] entering Candidate state
2016/03/28 05:57:48 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:48 [DEBUG] raft: Vote granted from 127.0.0.1:15334. Tally: 1
2016/03/28 05:57:48 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:48 [INFO] raft: Node at 127.0.0.1:15334 [Leader] entering Leader state
2016/03/28 05:57:48 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:48 [INFO] consul: New leader elected: Node 15333
2016/03/28 05:57:49 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:49 [DEBUG] raft: Node 127.0.0.1:15334 updated peer set (2): [127.0.0.1:15334]
2016/03/28 05:57:49 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:49 [INFO] consul: member 'Node 15333' joined, marking health alive
2016/03/28 05:57:50 [INFO] consul: shutting down server
2016/03/28 05:57:50 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:51 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:51 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:57:51 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestHealth_ServiceNodes (3.37s)
=== RUN   TestHealth_ServiceNodes_DistanceSort
2016/03/28 05:57:51 [INFO] raft: Node at 127.0.0.1:15338 [Follower] entering Follower state
2016/03/28 05:57:51 [INFO] serf: EventMemberJoin: Node 15337 127.0.0.1
2016/03/28 05:57:51 [INFO] consul: adding LAN server Node 15337 (Addr: 127.0.0.1:15338) (DC: dc1)
2016/03/28 05:57:51 [INFO] serf: EventMemberJoin: Node 15337.dc1 127.0.0.1
2016/03/28 05:57:51 [INFO] consul: adding WAN server Node 15337.dc1 (Addr: 127.0.0.1:15338) (DC: dc1)
2016/03/28 05:57:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:51 [INFO] raft: Node at 127.0.0.1:15338 [Candidate] entering Candidate state
2016/03/28 05:57:52 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:52 [DEBUG] raft: Vote granted from 127.0.0.1:15338. Tally: 1
2016/03/28 05:57:52 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:52 [INFO] raft: Node at 127.0.0.1:15338 [Leader] entering Leader state
2016/03/28 05:57:52 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:52 [INFO] consul: New leader elected: Node 15337
2016/03/28 05:57:52 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:52 [DEBUG] raft: Node 127.0.0.1:15338 updated peer set (2): [127.0.0.1:15338]
2016/03/28 05:57:52 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:52 [INFO] consul: member 'Node 15337' joined, marking health alive
2016/03/28 05:57:53 [INFO] consul: shutting down server
2016/03/28 05:57:53 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:53 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:54 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_ServiceNodes_DistanceSort (3.07s)
=== RUN   TestHealth_NodeChecks_FilterACL
2016/03/28 05:57:54 [INFO] raft: Node at 127.0.0.1:15342 [Follower] entering Follower state
2016/03/28 05:57:54 [INFO] serf: EventMemberJoin: Node 15341 127.0.0.1
2016/03/28 05:57:54 [INFO] consul: adding LAN server Node 15341 (Addr: 127.0.0.1:15342) (DC: dc1)
2016/03/28 05:57:54 [INFO] serf: EventMemberJoin: Node 15341.dc1 127.0.0.1
2016/03/28 05:57:54 [INFO] consul: adding WAN server Node 15341.dc1 (Addr: 127.0.0.1:15342) (DC: dc1)
2016/03/28 05:57:54 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:54 [INFO] raft: Node at 127.0.0.1:15342 [Candidate] entering Candidate state
2016/03/28 05:57:55 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:55 [DEBUG] raft: Vote granted from 127.0.0.1:15342. Tally: 1
2016/03/28 05:57:55 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:55 [INFO] raft: Node at 127.0.0.1:15342 [Leader] entering Leader state
2016/03/28 05:57:55 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:55 [INFO] consul: New leader elected: Node 15341
2016/03/28 05:57:55 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:55 [DEBUG] raft: Node 127.0.0.1:15342 updated peer set (2): [127.0.0.1:15342]
2016/03/28 05:57:55 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:57:55 [INFO] consul: member 'Node 15341' joined, marking health alive
2016/03/28 05:57:57 [DEBUG] consul: dropping check "service:bar" from result due to ACLs
2016/03/28 05:57:57 [INFO] consul: shutting down server
2016/03/28 05:57:57 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:57 [WARN] serf: Shutdown without a Leave
2016/03/28 05:57:57 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_NodeChecks_FilterACL (3.50s)
=== RUN   TestHealth_ServiceChecks_FilterACL
2016/03/28 05:57:58 [INFO] raft: Node at 127.0.0.1:15346 [Follower] entering Follower state
2016/03/28 05:57:58 [INFO] serf: EventMemberJoin: Node 15345 127.0.0.1
2016/03/28 05:57:58 [INFO] consul: adding LAN server Node 15345 (Addr: 127.0.0.1:15346) (DC: dc1)
2016/03/28 05:57:58 [INFO] serf: EventMemberJoin: Node 15345.dc1 127.0.0.1
2016/03/28 05:57:58 [INFO] consul: adding WAN server Node 15345.dc1 (Addr: 127.0.0.1:15346) (DC: dc1)
2016/03/28 05:57:58 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:57:58 [INFO] raft: Node at 127.0.0.1:15346 [Candidate] entering Candidate state
2016/03/28 05:57:59 [DEBUG] raft: Votes needed: 1
2016/03/28 05:57:59 [DEBUG] raft: Vote granted from 127.0.0.1:15346. Tally: 1
2016/03/28 05:57:59 [INFO] raft: Election won. Tally: 1
2016/03/28 05:57:59 [INFO] raft: Node at 127.0.0.1:15346 [Leader] entering Leader state
2016/03/28 05:57:59 [INFO] consul: cluster leadership acquired
2016/03/28 05:57:59 [INFO] consul: New leader elected: Node 15345
2016/03/28 05:57:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:57:59 [DEBUG] raft: Node 127.0.0.1:15346 updated peer set (2): [127.0.0.1:15346]
2016/03/28 05:57:59 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:00 [INFO] consul: member 'Node 15345' joined, marking health alive
2016/03/28 05:58:01 [DEBUG] consul: dropping check "service:bar" from result due to ACLs
2016/03/28 05:58:01 [INFO] consul: shutting down server
2016/03/28 05:58:01 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:02 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:02 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_ServiceChecks_FilterACL (4.37s)
=== RUN   TestHealth_ServiceNodes_FilterACL
2016/03/28 05:58:02 [INFO] raft: Node at 127.0.0.1:15350 [Follower] entering Follower state
2016/03/28 05:58:02 [INFO] serf: EventMemberJoin: Node 15349 127.0.0.1
2016/03/28 05:58:02 [INFO] consul: adding LAN server Node 15349 (Addr: 127.0.0.1:15350) (DC: dc1)
2016/03/28 05:58:02 [INFO] serf: EventMemberJoin: Node 15349.dc1 127.0.0.1
2016/03/28 05:58:02 [INFO] consul: adding WAN server Node 15349.dc1 (Addr: 127.0.0.1:15350) (DC: dc1)
2016/03/28 05:58:02 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:02 [INFO] raft: Node at 127.0.0.1:15350 [Candidate] entering Candidate state
2016/03/28 05:58:03 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:03 [DEBUG] raft: Vote granted from 127.0.0.1:15350. Tally: 1
2016/03/28 05:58:03 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:03 [INFO] raft: Node at 127.0.0.1:15350 [Leader] entering Leader state
2016/03/28 05:58:03 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:03 [INFO] consul: New leader elected: Node 15349
2016/03/28 05:58:03 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:03 [DEBUG] raft: Node 127.0.0.1:15350 updated peer set (2): [127.0.0.1:15350]
2016/03/28 05:58:03 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:03 [INFO] consul: member 'Node 15349' joined, marking health alive
2016/03/28 05:58:05 [DEBUG] consul: dropping node "Node 15349" from result due to ACLs
2016/03/28 05:58:05 [INFO] consul: shutting down server
2016/03/28 05:58:05 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:05 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:05 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_ServiceNodes_FilterACL (3.75s)
=== RUN   TestHealth_ChecksInState_FilterACL
2016/03/28 05:58:06 [INFO] raft: Node at 127.0.0.1:15354 [Follower] entering Follower state
2016/03/28 05:58:06 [INFO] serf: EventMemberJoin: Node 15353 127.0.0.1
2016/03/28 05:58:06 [INFO] consul: adding LAN server Node 15353 (Addr: 127.0.0.1:15354) (DC: dc1)
2016/03/28 05:58:06 [INFO] serf: EventMemberJoin: Node 15353.dc1 127.0.0.1
2016/03/28 05:58:06 [INFO] consul: adding WAN server Node 15353.dc1 (Addr: 127.0.0.1:15354) (DC: dc1)
2016/03/28 05:58:06 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:06 [INFO] raft: Node at 127.0.0.1:15354 [Candidate] entering Candidate state
2016/03/28 05:58:07 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:07 [DEBUG] raft: Vote granted from 127.0.0.1:15354. Tally: 1
2016/03/28 05:58:07 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:07 [INFO] raft: Node at 127.0.0.1:15354 [Leader] entering Leader state
2016/03/28 05:58:07 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:07 [INFO] consul: New leader elected: Node 15353
2016/03/28 05:58:07 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:07 [DEBUG] raft: Node 127.0.0.1:15354 updated peer set (2): [127.0.0.1:15354]
2016/03/28 05:58:07 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:08 [INFO] consul: member 'Node 15353' joined, marking health alive
2016/03/28 05:58:09 [INFO] consul: shutting down server
2016/03/28 05:58:09 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:09 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:09 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_ChecksInState_FilterACL (4.10s)
=== RUN   TestInternal_NodeInfo
2016/03/28 05:58:10 [INFO] raft: Node at 127.0.0.1:15358 [Follower] entering Follower state
2016/03/28 05:58:10 [INFO] serf: EventMemberJoin: Node 15357 127.0.0.1
2016/03/28 05:58:10 [INFO] consul: adding LAN server Node 15357 (Addr: 127.0.0.1:15358) (DC: dc1)
2016/03/28 05:58:10 [INFO] serf: EventMemberJoin: Node 15357.dc1 127.0.0.1
2016/03/28 05:58:10 [INFO] consul: adding WAN server Node 15357.dc1 (Addr: 127.0.0.1:15358) (DC: dc1)
2016/03/28 05:58:10 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:10 [INFO] raft: Node at 127.0.0.1:15358 [Candidate] entering Candidate state
2016/03/28 05:58:11 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:11 [DEBUG] raft: Vote granted from 127.0.0.1:15358. Tally: 1
2016/03/28 05:58:11 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:11 [INFO] raft: Node at 127.0.0.1:15358 [Leader] entering Leader state
2016/03/28 05:58:11 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:11 [INFO] consul: New leader elected: Node 15357
2016/03/28 05:58:11 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:11 [DEBUG] raft: Node 127.0.0.1:15358 updated peer set (2): [127.0.0.1:15358]
2016/03/28 05:58:11 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:11 [INFO] consul: member 'Node 15357' joined, marking health alive
2016/03/28 05:58:11 [INFO] consul: shutting down server
2016/03/28 05:58:11 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:12 [WARN] serf: Shutdown without a Leave
--- PASS: TestInternal_NodeInfo (2.15s)
=== RUN   TestInternal_NodeDump
2016/03/28 05:58:12 [INFO] raft: Node at 127.0.0.1:15362 [Follower] entering Follower state
2016/03/28 05:58:12 [INFO] serf: EventMemberJoin: Node 15361 127.0.0.1
2016/03/28 05:58:12 [INFO] consul: adding LAN server Node 15361 (Addr: 127.0.0.1:15362) (DC: dc1)
2016/03/28 05:58:12 [INFO] serf: EventMemberJoin: Node 15361.dc1 127.0.0.1
2016/03/28 05:58:12 [INFO] consul: adding WAN server Node 15361.dc1 (Addr: 127.0.0.1:15362) (DC: dc1)
2016/03/28 05:58:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:12 [INFO] raft: Node at 127.0.0.1:15362 [Candidate] entering Candidate state
2016/03/28 05:58:13 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:13 [DEBUG] raft: Vote granted from 127.0.0.1:15362. Tally: 1
2016/03/28 05:58:13 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:13 [INFO] raft: Node at 127.0.0.1:15362 [Leader] entering Leader state
2016/03/28 05:58:13 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:13 [INFO] consul: New leader elected: Node 15361
2016/03/28 05:58:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:13 [DEBUG] raft: Node 127.0.0.1:15362 updated peer set (2): [127.0.0.1:15362]
2016/03/28 05:58:13 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:13 [INFO] consul: member 'Node 15361' joined, marking health alive
2016/03/28 05:58:14 [INFO] consul: shutting down server
2016/03/28 05:58:14 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:14 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:15 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestInternal_NodeDump (3.02s)
=== RUN   TestInternal_KeyringOperation
2016/03/28 05:58:15 [INFO] raft: Node at 127.0.0.1:15366 [Follower] entering Follower state
2016/03/28 05:58:15 [INFO] serf: EventMemberJoin: Node 15365 127.0.0.1
2016/03/28 05:58:15 [INFO] consul: adding LAN server Node 15365 (Addr: 127.0.0.1:15366) (DC: dc1)
2016/03/28 05:58:15 [INFO] serf: EventMemberJoin: Node 15365.dc1 127.0.0.1
2016/03/28 05:58:15 [INFO] consul: adding WAN server Node 15365.dc1 (Addr: 127.0.0.1:15366) (DC: dc1)
2016/03/28 05:58:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:15 [INFO] raft: Node at 127.0.0.1:15366 [Candidate] entering Candidate state
2016/03/28 05:58:16 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:16 [DEBUG] raft: Vote granted from 127.0.0.1:15366. Tally: 1
2016/03/28 05:58:16 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:16 [INFO] raft: Node at 127.0.0.1:15366 [Leader] entering Leader state
2016/03/28 05:58:16 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:16 [INFO] consul: New leader elected: Node 15365
2016/03/28 05:58:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:16 [DEBUG] raft: Node 127.0.0.1:15366 updated peer set (2): [127.0.0.1:15366]
2016/03/28 05:58:16 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:16 [INFO] consul: member 'Node 15365' joined, marking health alive
2016/03/28 05:58:16 [INFO] serf: Received list-keys query
2016/03/28 05:58:16 [DEBUG] serf: messageQueryResponseType: Node 15365.dc1
2016/03/28 05:58:16 [INFO] serf: Received list-keys query
2016/03/28 05:58:16 [DEBUG] serf: messageQueryResponseType: Node 15365
2016/03/28 05:58:17 [INFO] raft: Node at 127.0.0.1:15370 [Follower] entering Follower state
2016/03/28 05:58:17 [INFO] serf: EventMemberJoin: Node 15369 127.0.0.1
2016/03/28 05:58:17 [INFO] consul: adding LAN server Node 15369 (Addr: 127.0.0.1:15370) (DC: dc2)
2016/03/28 05:58:17 [INFO] serf: EventMemberJoin: Node 15369.dc2 127.0.0.1
2016/03/28 05:58:17 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15368
2016/03/28 05:58:17 [INFO] consul: adding WAN server Node 15369.dc2 (Addr: 127.0.0.1:15370) (DC: dc2)
2016/03/28 05:58:17 [DEBUG] memberlist: TCP connection from=127.0.0.1:41722
2016/03/28 05:58:17 [INFO] serf: EventMemberJoin: Node 15369.dc2 127.0.0.1
2016/03/28 05:58:17 [INFO] consul: adding WAN server Node 15369.dc2 (Addr: 127.0.0.1:15370) (DC: dc2)
2016/03/28 05:58:17 [INFO] serf: EventMemberJoin: Node 15365.dc1 127.0.0.1
2016/03/28 05:58:17 [INFO] consul: adding WAN server Node 15365.dc1 (Addr: 127.0.0.1:15366) (DC: dc1)
2016/03/28 05:58:17 [INFO] serf: Received list-keys query
2016/03/28 05:58:17 [DEBUG] serf: messageQueryResponseType: Node 15365.dc1
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [INFO] serf: Received list-keys query
2016/03/28 05:58:17 [INFO] serf: Received list-keys query
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/28 05:58:17 [DEBUG] serf: messageQueryResponseType: Node 15369.dc2
2016/03/28 05:58:17 [DEBUG] serf: messageQueryResponseType: Node 15369.dc2
2016/03/28 05:58:17 [INFO] serf: Received list-keys query
2016/03/28 05:58:17 [INFO] serf: Received list-keys query
2016/03/28 05:58:17 [DEBUG] serf: messageQueryResponseType: Node 15365
2016/03/28 05:58:17 [DEBUG] serf: messageQueryResponseType: Node 15369
2016/03/28 05:58:17 [INFO] consul: shutting down server
2016/03/28 05:58:17 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:17 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:17 [INFO] raft: Node at 127.0.0.1:15370 [Candidate] entering Candidate state
2016/03/28 05:58:17 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/28 05:58:17 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/28 05:58:17 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:17 [DEBUG] memberlist: Failed UDP ping: Node 15369.dc2 (timeout reached)
2016/03/28 05:58:17 [INFO] memberlist: Suspect Node 15369.dc2 has failed, no acks received
2016/03/28 05:58:18 [DEBUG] memberlist: Failed UDP ping: Node 15369.dc2 (timeout reached)
2016/03/28 05:58:18 [INFO] memberlist: Suspect Node 15369.dc2 has failed, no acks received
2016/03/28 05:58:18 [INFO] memberlist: Marking Node 15369.dc2 as failed, suspect timeout reached
2016/03/28 05:58:18 [INFO] serf: EventMemberFailed: Node 15369.dc2 127.0.0.1
2016/03/28 05:58:18 [INFO] consul: removing WAN server Node 15369.dc2 (Addr: 127.0.0.1:15370) (DC: dc2)
2016/03/28 05:58:18 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:18 [INFO] consul: shutting down server
2016/03/28 05:58:18 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:18 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:18 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:58:18 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestInternal_KeyringOperation (3.46s)
=== RUN   TestInternal_NodeInfo_FilterACL
2016/03/28 05:58:19 [INFO] raft: Node at 127.0.0.1:15374 [Follower] entering Follower state
2016/03/28 05:58:19 [INFO] serf: EventMemberJoin: Node 15373 127.0.0.1
2016/03/28 05:58:19 [INFO] consul: adding LAN server Node 15373 (Addr: 127.0.0.1:15374) (DC: dc1)
2016/03/28 05:58:19 [INFO] serf: EventMemberJoin: Node 15373.dc1 127.0.0.1
2016/03/28 05:58:19 [INFO] consul: adding WAN server Node 15373.dc1 (Addr: 127.0.0.1:15374) (DC: dc1)
2016/03/28 05:58:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:19 [INFO] raft: Node at 127.0.0.1:15374 [Candidate] entering Candidate state
2016/03/28 05:58:19 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:19 [DEBUG] raft: Vote granted from 127.0.0.1:15374. Tally: 1
2016/03/28 05:58:19 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:19 [INFO] raft: Node at 127.0.0.1:15374 [Leader] entering Leader state
2016/03/28 05:58:19 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:19 [INFO] consul: New leader elected: Node 15373
2016/03/28 05:58:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:19 [DEBUG] raft: Node 127.0.0.1:15374 updated peer set (2): [127.0.0.1:15374]
2016/03/28 05:58:19 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:20 [INFO] consul: member 'Node 15373' joined, marking health alive
2016/03/28 05:58:21 [DEBUG] consul: dropping check "service:bar" from result due to ACLs
2016/03/28 05:58:21 [INFO] consul: shutting down server
2016/03/28 05:58:21 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:21 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:22 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:58:22 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestInternal_NodeInfo_FilterACL (3.40s)
=== RUN   TestInternal_NodeDump_FilterACL
2016/03/28 05:58:22 [INFO] raft: Node at 127.0.0.1:15378 [Follower] entering Follower state
2016/03/28 05:58:22 [INFO] serf: EventMemberJoin: Node 15377 127.0.0.1
2016/03/28 05:58:22 [INFO] consul: adding LAN server Node 15377 (Addr: 127.0.0.1:15378) (DC: dc1)
2016/03/28 05:58:22 [INFO] serf: EventMemberJoin: Node 15377.dc1 127.0.0.1
2016/03/28 05:58:22 [INFO] consul: adding WAN server Node 15377.dc1 (Addr: 127.0.0.1:15378) (DC: dc1)
2016/03/28 05:58:22 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:22 [INFO] raft: Node at 127.0.0.1:15378 [Candidate] entering Candidate state
2016/03/28 05:58:23 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:23 [DEBUG] raft: Vote granted from 127.0.0.1:15378. Tally: 1
2016/03/28 05:58:23 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:23 [INFO] raft: Node at 127.0.0.1:15378 [Leader] entering Leader state
2016/03/28 05:58:23 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:23 [INFO] consul: New leader elected: Node 15377
2016/03/28 05:58:23 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:23 [DEBUG] raft: Node 127.0.0.1:15378 updated peer set (2): [127.0.0.1:15378]
2016/03/28 05:58:23 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:23 [INFO] consul: member 'Node 15377' joined, marking health alive
2016/03/28 05:58:25 [INFO] consul: shutting down server
2016/03/28 05:58:25 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:25 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:25 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestInternal_NodeDump_FilterACL (3.81s)
=== RUN   TestInternal_EventFire_Token
2016/03/28 05:58:26 [INFO] raft: Node at 127.0.0.1:15382 [Follower] entering Follower state
2016/03/28 05:58:26 [INFO] serf: EventMemberJoin: Node 15381 127.0.0.1
2016/03/28 05:58:26 [INFO] consul: adding LAN server Node 15381 (Addr: 127.0.0.1:15382) (DC: dc1)
2016/03/28 05:58:26 [INFO] serf: EventMemberJoin: Node 15381.dc1 127.0.0.1
2016/03/28 05:58:26 [INFO] consul: adding WAN server Node 15381.dc1 (Addr: 127.0.0.1:15382) (DC: dc1)
2016/03/28 05:58:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:26 [INFO] raft: Node at 127.0.0.1:15382 [Candidate] entering Candidate state
2016/03/28 05:58:26 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:26 [DEBUG] raft: Vote granted from 127.0.0.1:15382. Tally: 1
2016/03/28 05:58:26 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:26 [INFO] raft: Node at 127.0.0.1:15382 [Leader] entering Leader state
2016/03/28 05:58:26 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:26 [INFO] consul: New leader elected: Node 15381
2016/03/28 05:58:27 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:27 [DEBUG] raft: Node 127.0.0.1:15382 updated peer set (2): [127.0.0.1:15382]
2016/03/28 05:58:27 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:27 [INFO] consul: member 'Node 15381' joined, marking health alive
2016/03/28 05:58:27 [WARN] consul: user event "foo" blocked by ACLs
2016/03/28 05:58:27 [DEBUG] consul: user event: foo
2016/03/28 05:58:27 [INFO] consul: shutting down server
2016/03/28 05:58:27 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:27 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:27 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestInternal_EventFire_Token (2.14s)
=== RUN   TestHealthCheckRace
--- PASS: TestHealthCheckRace (0.00s)
=== RUN   TestKVS_Apply
2016/03/28 05:58:28 [INFO] raft: Node at 127.0.0.1:15386 [Follower] entering Follower state
2016/03/28 05:58:28 [INFO] serf: EventMemberJoin: Node 15385 127.0.0.1
2016/03/28 05:58:28 [INFO] consul: adding LAN server Node 15385 (Addr: 127.0.0.1:15386) (DC: dc1)
2016/03/28 05:58:28 [INFO] serf: EventMemberJoin: Node 15385.dc1 127.0.0.1
2016/03/28 05:58:28 [INFO] consul: adding WAN server Node 15385.dc1 (Addr: 127.0.0.1:15386) (DC: dc1)
2016/03/28 05:58:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:29 [INFO] raft: Node at 127.0.0.1:15386 [Candidate] entering Candidate state
2016/03/28 05:58:29 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:29 [DEBUG] raft: Vote granted from 127.0.0.1:15386. Tally: 1
2016/03/28 05:58:29 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:29 [INFO] raft: Node at 127.0.0.1:15386 [Leader] entering Leader state
2016/03/28 05:58:29 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:29 [INFO] consul: New leader elected: Node 15385
2016/03/28 05:58:29 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:30 [DEBUG] raft: Node 127.0.0.1:15386 updated peer set (2): [127.0.0.1:15386]
2016/03/28 05:58:30 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:30 [INFO] consul: member 'Node 15385' joined, marking health alive
2016/03/28 05:58:31 [INFO] consul: shutting down server
2016/03/28 05:58:31 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:31 [WARN] serf: Shutdown without a Leave
--- PASS: TestKVS_Apply (3.17s)
=== RUN   TestKVS_Apply_ACLDeny
2016/03/28 05:58:32 [INFO] raft: Node at 127.0.0.1:15390 [Follower] entering Follower state
2016/03/28 05:58:32 [INFO] serf: EventMemberJoin: Node 15389 127.0.0.1
2016/03/28 05:58:32 [INFO] consul: adding LAN server Node 15389 (Addr: 127.0.0.1:15390) (DC: dc1)
2016/03/28 05:58:32 [INFO] serf: EventMemberJoin: Node 15389.dc1 127.0.0.1
2016/03/28 05:58:32 [INFO] consul: adding WAN server Node 15389.dc1 (Addr: 127.0.0.1:15390) (DC: dc1)
2016/03/28 05:58:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:32 [INFO] raft: Node at 127.0.0.1:15390 [Candidate] entering Candidate state
2016/03/28 05:58:32 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:32 [DEBUG] raft: Vote granted from 127.0.0.1:15390. Tally: 1
2016/03/28 05:58:32 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:32 [INFO] raft: Node at 127.0.0.1:15390 [Leader] entering Leader state
2016/03/28 05:58:32 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:32 [INFO] consul: New leader elected: Node 15389
2016/03/28 05:58:33 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:33 [DEBUG] raft: Node 127.0.0.1:15390 updated peer set (2): [127.0.0.1:15390]
2016/03/28 05:58:33 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:33 [INFO] consul: member 'Node 15389' joined, marking health alive
2016/03/28 05:58:34 [INFO] consul: shutting down server
2016/03/28 05:58:34 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:34 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:34 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:58:34 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVS_Apply_ACLDeny (3.54s)
=== RUN   TestKVS_Get
2016/03/28 05:58:35 [INFO] raft: Node at 127.0.0.1:15394 [Follower] entering Follower state
2016/03/28 05:58:35 [INFO] serf: EventMemberJoin: Node 15393 127.0.0.1
2016/03/28 05:58:35 [INFO] consul: adding LAN server Node 15393 (Addr: 127.0.0.1:15394) (DC: dc1)
2016/03/28 05:58:35 [INFO] serf: EventMemberJoin: Node 15393.dc1 127.0.0.1
2016/03/28 05:58:35 [INFO] consul: adding WAN server Node 15393.dc1 (Addr: 127.0.0.1:15394) (DC: dc1)
2016/03/28 05:58:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:35 [INFO] raft: Node at 127.0.0.1:15394 [Candidate] entering Candidate state
2016/03/28 05:58:35 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:35 [DEBUG] raft: Vote granted from 127.0.0.1:15394. Tally: 1
2016/03/28 05:58:35 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:35 [INFO] raft: Node at 127.0.0.1:15394 [Leader] entering Leader state
2016/03/28 05:58:35 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:35 [INFO] consul: New leader elected: Node 15393
2016/03/28 05:58:35 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:36 [DEBUG] raft: Node 127.0.0.1:15394 updated peer set (2): [127.0.0.1:15394]
2016/03/28 05:58:36 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:36 [INFO] consul: member 'Node 15393' joined, marking health alive
2016/03/28 05:58:36 [INFO] consul: shutting down server
2016/03/28 05:58:36 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:36 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:36 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:58:36 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVS_Get (2.19s)
=== RUN   TestKVS_Get_ACLDeny
2016/03/28 05:58:37 [INFO] raft: Node at 127.0.0.1:15398 [Follower] entering Follower state
2016/03/28 05:58:37 [INFO] serf: EventMemberJoin: Node 15397 127.0.0.1
2016/03/28 05:58:37 [INFO] consul: adding LAN server Node 15397 (Addr: 127.0.0.1:15398) (DC: dc1)
2016/03/28 05:58:37 [INFO] serf: EventMemberJoin: Node 15397.dc1 127.0.0.1
2016/03/28 05:58:37 [INFO] consul: adding WAN server Node 15397.dc1 (Addr: 127.0.0.1:15398) (DC: dc1)
2016/03/28 05:58:37 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:37 [INFO] raft: Node at 127.0.0.1:15398 [Candidate] entering Candidate state
2016/03/28 05:58:38 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:38 [DEBUG] raft: Vote granted from 127.0.0.1:15398. Tally: 1
2016/03/28 05:58:38 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:38 [INFO] raft: Node at 127.0.0.1:15398 [Leader] entering Leader state
2016/03/28 05:58:38 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:38 [INFO] consul: New leader elected: Node 15397
2016/03/28 05:58:38 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:38 [DEBUG] raft: Node 127.0.0.1:15398 updated peer set (2): [127.0.0.1:15398]
2016/03/28 05:58:38 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:39 [INFO] consul: member 'Node 15397' joined, marking health alive
2016/03/28 05:58:39 [INFO] consul: shutting down server
2016/03/28 05:58:39 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:40 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:40 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVS_Get_ACLDeny (3.33s)
=== RUN   TestKVSEndpoint_List
2016/03/28 05:58:40 [INFO] raft: Node at 127.0.0.1:15402 [Follower] entering Follower state
2016/03/28 05:58:40 [INFO] serf: EventMemberJoin: Node 15401 127.0.0.1
2016/03/28 05:58:40 [INFO] consul: adding LAN server Node 15401 (Addr: 127.0.0.1:15402) (DC: dc1)
2016/03/28 05:58:40 [INFO] serf: EventMemberJoin: Node 15401.dc1 127.0.0.1
2016/03/28 05:58:40 [INFO] consul: adding WAN server Node 15401.dc1 (Addr: 127.0.0.1:15402) (DC: dc1)
2016/03/28 05:58:40 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:40 [INFO] raft: Node at 127.0.0.1:15402 [Candidate] entering Candidate state
2016/03/28 05:58:41 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:41 [DEBUG] raft: Vote granted from 127.0.0.1:15402. Tally: 1
2016/03/28 05:58:41 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:41 [INFO] raft: Node at 127.0.0.1:15402 [Leader] entering Leader state
2016/03/28 05:58:41 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:41 [INFO] consul: New leader elected: Node 15401
2016/03/28 05:58:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:41 [DEBUG] raft: Node 127.0.0.1:15402 updated peer set (2): [127.0.0.1:15402]
2016/03/28 05:58:41 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:41 [INFO] consul: member 'Node 15401' joined, marking health alive
2016/03/28 05:58:42 [INFO] consul: shutting down server
2016/03/28 05:58:42 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:42 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:42 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVSEndpoint_List (2.76s)
=== RUN   TestKVSEndpoint_List_Blocking
2016/03/28 05:58:44 [INFO] raft: Node at 127.0.0.1:15406 [Follower] entering Follower state
2016/03/28 05:58:44 [INFO] serf: EventMemberJoin: Node 15405 127.0.0.1
2016/03/28 05:58:44 [INFO] consul: adding LAN server Node 15405 (Addr: 127.0.0.1:15406) (DC: dc1)
2016/03/28 05:58:44 [INFO] serf: EventMemberJoin: Node 15405.dc1 127.0.0.1
2016/03/28 05:58:44 [INFO] consul: adding WAN server Node 15405.dc1 (Addr: 127.0.0.1:15406) (DC: dc1)
2016/03/28 05:58:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:44 [INFO] raft: Node at 127.0.0.1:15406 [Candidate] entering Candidate state
2016/03/28 05:58:44 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:44 [DEBUG] raft: Vote granted from 127.0.0.1:15406. Tally: 1
2016/03/28 05:58:44 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:44 [INFO] raft: Node at 127.0.0.1:15406 [Leader] entering Leader state
2016/03/28 05:58:44 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:44 [INFO] consul: New leader elected: Node 15405
2016/03/28 05:58:44 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:45 [DEBUG] raft: Node 127.0.0.1:15406 updated peer set (2): [127.0.0.1:15406]
2016/03/28 05:58:45 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:45 [INFO] consul: member 'Node 15405' joined, marking health alive
2016/03/28 05:58:46 [INFO] consul: shutting down server
2016/03/28 05:58:46 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:46 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:46 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:58:46 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVSEndpoint_List_Blocking (3.71s)
=== RUN   TestKVSEndpoint_List_ACLDeny
2016/03/28 05:58:47 [INFO] raft: Node at 127.0.0.1:15410 [Follower] entering Follower state
2016/03/28 05:58:47 [INFO] serf: EventMemberJoin: Node 15409 127.0.0.1
2016/03/28 05:58:47 [INFO] consul: adding LAN server Node 15409 (Addr: 127.0.0.1:15410) (DC: dc1)
2016/03/28 05:58:47 [INFO] serf: EventMemberJoin: Node 15409.dc1 127.0.0.1
2016/03/28 05:58:47 [INFO] consul: adding WAN server Node 15409.dc1 (Addr: 127.0.0.1:15410) (DC: dc1)
2016/03/28 05:58:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:47 [INFO] raft: Node at 127.0.0.1:15410 [Candidate] entering Candidate state
2016/03/28 05:58:47 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:47 [DEBUG] raft: Vote granted from 127.0.0.1:15410. Tally: 1
2016/03/28 05:58:47 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:47 [INFO] raft: Node at 127.0.0.1:15410 [Leader] entering Leader state
2016/03/28 05:58:47 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:47 [INFO] consul: New leader elected: Node 15409
2016/03/28 05:58:47 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:47 [DEBUG] raft: Node 127.0.0.1:15410 updated peer set (2): [127.0.0.1:15410]
2016/03/28 05:58:47 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:48 [INFO] consul: member 'Node 15409' joined, marking health alive
2016/03/28 05:58:51 [INFO] consul: shutting down server
2016/03/28 05:58:51 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:51 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:51 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVSEndpoint_List_ACLDeny (5.17s)
=== RUN   TestKVSEndpoint_ListKeys
2016/03/28 05:58:52 [INFO] raft: Node at 127.0.0.1:15414 [Follower] entering Follower state
2016/03/28 05:58:52 [INFO] serf: EventMemberJoin: Node 15413 127.0.0.1
2016/03/28 05:58:52 [INFO] consul: adding LAN server Node 15413 (Addr: 127.0.0.1:15414) (DC: dc1)
2016/03/28 05:58:52 [INFO] serf: EventMemberJoin: Node 15413.dc1 127.0.0.1
2016/03/28 05:58:52 [INFO] consul: adding WAN server Node 15413.dc1 (Addr: 127.0.0.1:15414) (DC: dc1)
2016/03/28 05:58:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:52 [INFO] raft: Node at 127.0.0.1:15414 [Candidate] entering Candidate state
2016/03/28 05:58:52 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:52 [DEBUG] raft: Vote granted from 127.0.0.1:15414. Tally: 1
2016/03/28 05:58:52 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:52 [INFO] raft: Node at 127.0.0.1:15414 [Leader] entering Leader state
2016/03/28 05:58:52 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:52 [INFO] consul: New leader elected: Node 15413
2016/03/28 05:58:53 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:53 [DEBUG] raft: Node 127.0.0.1:15414 updated peer set (2): [127.0.0.1:15414]
2016/03/28 05:58:53 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:53 [INFO] consul: member 'Node 15413' joined, marking health alive
2016/03/28 05:58:54 [INFO] consul: shutting down server
2016/03/28 05:58:54 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:54 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:54 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVSEndpoint_ListKeys (2.96s)
=== RUN   TestKVSEndpoint_ListKeys_ACLDeny
2016/03/28 05:58:55 [INFO] raft: Node at 127.0.0.1:15418 [Follower] entering Follower state
2016/03/28 05:58:55 [INFO] serf: EventMemberJoin: Node 15417 127.0.0.1
2016/03/28 05:58:55 [INFO] consul: adding LAN server Node 15417 (Addr: 127.0.0.1:15418) (DC: dc1)
2016/03/28 05:58:55 [INFO] serf: EventMemberJoin: Node 15417.dc1 127.0.0.1
2016/03/28 05:58:55 [INFO] consul: adding WAN server Node 15417.dc1 (Addr: 127.0.0.1:15418) (DC: dc1)
2016/03/28 05:58:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:58:55 [INFO] raft: Node at 127.0.0.1:15418 [Candidate] entering Candidate state
2016/03/28 05:58:55 [DEBUG] raft: Votes needed: 1
2016/03/28 05:58:55 [DEBUG] raft: Vote granted from 127.0.0.1:15418. Tally: 1
2016/03/28 05:58:55 [INFO] raft: Election won. Tally: 1
2016/03/28 05:58:55 [INFO] raft: Node at 127.0.0.1:15418 [Leader] entering Leader state
2016/03/28 05:58:55 [INFO] consul: cluster leadership acquired
2016/03/28 05:58:55 [INFO] consul: New leader elected: Node 15417
2016/03/28 05:58:56 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:58:56 [DEBUG] raft: Node 127.0.0.1:15418 updated peer set (2): [127.0.0.1:15418]
2016/03/28 05:58:56 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:58:56 [INFO] consul: member 'Node 15417' joined, marking health alive
2016/03/28 05:58:59 [INFO] consul: shutting down server
2016/03/28 05:58:59 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:59 [WARN] serf: Shutdown without a Leave
2016/03/28 05:58:59 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:58:59 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVSEndpoint_ListKeys_ACLDeny (5.06s)
=== RUN   TestKVS_Apply_LockDelay
2016/03/28 05:59:01 [INFO] raft: Node at 127.0.0.1:15422 [Follower] entering Follower state
2016/03/28 05:59:01 [INFO] serf: EventMemberJoin: Node 15421 127.0.0.1
2016/03/28 05:59:01 [INFO] consul: adding LAN server Node 15421 (Addr: 127.0.0.1:15422) (DC: dc1)
2016/03/28 05:59:01 [INFO] serf: EventMemberJoin: Node 15421.dc1 127.0.0.1
2016/03/28 05:59:01 [INFO] consul: adding WAN server Node 15421.dc1 (Addr: 127.0.0.1:15422) (DC: dc1)
2016/03/28 05:59:01 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:01 [INFO] raft: Node at 127.0.0.1:15422 [Candidate] entering Candidate state
2016/03/28 05:59:01 [DEBUG] raft: Votes needed: 1
2016/03/28 05:59:01 [DEBUG] raft: Vote granted from 127.0.0.1:15422. Tally: 1
2016/03/28 05:59:01 [INFO] raft: Election won. Tally: 1
2016/03/28 05:59:01 [INFO] raft: Node at 127.0.0.1:15422 [Leader] entering Leader state
2016/03/28 05:59:01 [INFO] consul: cluster leadership acquired
2016/03/28 05:59:01 [INFO] consul: New leader elected: Node 15421
2016/03/28 05:59:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:59:02 [DEBUG] raft: Node 127.0.0.1:15422 updated peer set (2): [127.0.0.1:15422]
2016/03/28 05:59:02 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:59:02 [INFO] consul: member 'Node 15421' joined, marking health alive
2016/03/28 05:59:02 [WARN] consul.kvs: Rejecting lock of test due to lock-delay until 2016-03-28 05:59:02.386036619 +0000 UTC
2016/03/28 05:59:02 [INFO] consul: shutting down server
2016/03/28 05:59:02 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:02 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:02 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:59:02 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVS_Apply_LockDelay (2.88s)
=== RUN   TestLeader_RegisterMember
2016/03/28 05:59:03 [INFO] raft: Node at 127.0.0.1:15426 [Follower] entering Follower state
2016/03/28 05:59:03 [INFO] serf: EventMemberJoin: Node 15425 127.0.0.1
2016/03/28 05:59:03 [INFO] consul: adding LAN server Node 15425 (Addr: 127.0.0.1:15426) (DC: dc1)
2016/03/28 05:59:03 [INFO] serf: EventMemberJoin: Node 15425.dc1 127.0.0.1
2016/03/28 05:59:03 [INFO] consul: adding WAN server Node 15425.dc1 (Addr: 127.0.0.1:15426) (DC: dc1)
2016/03/28 05:59:03 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:59:03 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15427
2016/03/28 05:59:03 [DEBUG] memberlist: TCP connection from=127.0.0.1:54339
2016/03/28 05:59:03 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:59:03 [INFO] serf: EventMemberJoin: Node 15425 127.0.0.1
2016/03/28 05:59:03 [INFO] consul: adding server Node 15425 (Addr: 127.0.0.1:15426) (DC: dc1)
2016/03/28 05:59:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:03 [INFO] raft: Node at 127.0.0.1:15426 [Candidate] entering Candidate state
2016/03/28 05:59:03 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:03 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:03 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:03 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:03 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:03 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:03 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:03 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:03 [DEBUG] raft: Votes needed: 1
2016/03/28 05:59:03 [DEBUG] raft: Vote granted from 127.0.0.1:15426. Tally: 1
2016/03/28 05:59:03 [INFO] raft: Election won. Tally: 1
2016/03/28 05:59:03 [INFO] raft: Node at 127.0.0.1:15426 [Leader] entering Leader state
2016/03/28 05:59:03 [INFO] consul: cluster leadership acquired
2016/03/28 05:59:03 [INFO] consul: New leader elected: Node 15425
2016/03/28 05:59:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:03 [INFO] consul: New leader elected: Node 15425
2016/03/28 05:59:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:59:04 [DEBUG] raft: Node 127.0.0.1:15426 updated peer set (2): [127.0.0.1:15426]
2016/03/28 05:59:04 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:04 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:04 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:04 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:59:04 [INFO] consul: member 'Node 15425' joined, marking health alive
2016/03/28 05:59:04 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/03/28 05:59:04 [INFO] consul: shutting down client
2016/03/28 05:59:04 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:04 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/28 05:59:04 [INFO] consul: shutting down server
2016/03/28 05:59:04 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:04 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/28 05:59:04 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:04 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/03/28 05:59:04 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/03/28 05:59:05 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:59:05 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestLeader_RegisterMember (2.47s)
=== RUN   TestLeader_FailedMember
2016/03/28 05:59:06 [INFO] raft: Node at 127.0.0.1:15432 [Follower] entering Follower state
2016/03/28 05:59:06 [INFO] serf: EventMemberJoin: Node 15431 127.0.0.1
2016/03/28 05:59:06 [INFO] consul: adding LAN server Node 15431 (Addr: 127.0.0.1:15432) (DC: dc1)
2016/03/28 05:59:06 [INFO] serf: EventMemberJoin: Node 15431.dc1 127.0.0.1
2016/03/28 05:59:06 [INFO] consul: adding WAN server Node 15431.dc1 (Addr: 127.0.0.1:15432) (DC: dc1)
2016/03/28 05:59:06 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:59:06 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:06 [INFO] raft: Node at 127.0.0.1:15432 [Candidate] entering Candidate state
2016/03/28 05:59:06 [DEBUG] raft: Votes needed: 1
2016/03/28 05:59:06 [DEBUG] raft: Vote granted from 127.0.0.1:15432. Tally: 1
2016/03/28 05:59:06 [INFO] raft: Election won. Tally: 1
2016/03/28 05:59:06 [INFO] raft: Node at 127.0.0.1:15432 [Leader] entering Leader state
2016/03/28 05:59:06 [INFO] consul: cluster leadership acquired
2016/03/28 05:59:06 [INFO] consul: New leader elected: Node 15431
2016/03/28 05:59:07 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:59:07 [DEBUG] raft: Node 127.0.0.1:15432 updated peer set (2): [127.0.0.1:15432]
2016/03/28 05:59:07 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:59:07 [INFO] consul: member 'Node 15431' joined, marking health alive
2016/03/28 05:59:07 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15433
2016/03/28 05:59:07 [DEBUG] memberlist: TCP connection from=127.0.0.1:41825
2016/03/28 05:59:07 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:59:07 [INFO] serf: EventMemberJoin: Node 15431 127.0.0.1
2016/03/28 05:59:07 [INFO] consul: shutting down client
2016/03/28 05:59:07 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:07 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/28 05:59:07 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/28 05:59:07 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/03/28 05:59:07 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/28 05:59:07 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/28 05:59:07 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/03/28 05:59:07 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/03/28 05:59:07 [INFO] consul: member 'testco.internal' failed, marking health critical
2016/03/28 05:59:08 [INFO] consul: shutting down client
2016/03/28 05:59:08 [INFO] consul: shutting down server
2016/03/28 05:59:08 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:08 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:09 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 05:59:09 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestLeader_FailedMember (3.83s)
=== RUN   TestLeader_LeftMember
2016/03/28 05:59:09 [INFO] raft: Node at 127.0.0.1:15438 [Follower] entering Follower state
2016/03/28 05:59:09 [INFO] serf: EventMemberJoin: Node 15437 127.0.0.1
2016/03/28 05:59:09 [INFO] consul: adding LAN server Node 15437 (Addr: 127.0.0.1:15438) (DC: dc1)
2016/03/28 05:59:09 [INFO] serf: EventMemberJoin: Node 15437.dc1 127.0.0.1
2016/03/28 05:59:09 [INFO] consul: adding WAN server Node 15437.dc1 (Addr: 127.0.0.1:15438) (DC: dc1)
2016/03/28 05:59:09 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:59:09 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15439
2016/03/28 05:59:09 [DEBUG] memberlist: TCP connection from=127.0.0.1:43803
2016/03/28 05:59:09 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:59:09 [INFO] serf: EventMemberJoin: Node 15437 127.0.0.1
2016/03/28 05:59:09 [INFO] consul: adding server Node 15437 (Addr: 127.0.0.1:15438) (DC: dc1)
2016/03/28 05:59:09 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:09 [INFO] raft: Node at 127.0.0.1:15438 [Candidate] entering Candidate state
2016/03/28 05:59:09 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:09 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:09 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:09 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:09 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:09 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:09 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:09 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:10 [DEBUG] raft: Votes needed: 1
2016/03/28 05:59:10 [DEBUG] raft: Vote granted from 127.0.0.1:15438. Tally: 1
2016/03/28 05:59:10 [INFO] raft: Election won. Tally: 1
2016/03/28 05:59:10 [INFO] raft: Node at 127.0.0.1:15438 [Leader] entering Leader state
2016/03/28 05:59:10 [INFO] consul: cluster leadership acquired
2016/03/28 05:59:10 [INFO] consul: New leader elected: Node 15437
2016/03/28 05:59:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:10 [INFO] consul: New leader elected: Node 15437
2016/03/28 05:59:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:10 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:59:10 [DEBUG] raft: Node 127.0.0.1:15438 updated peer set (2): [127.0.0.1:15438]
2016/03/28 05:59:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:10 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:59:10 [INFO] consul: member 'Node 15437' joined, marking health alive
2016/03/28 05:59:10 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/03/28 05:59:11 [INFO] consul: client starting leave
2016/03/28 05:59:11 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/28 05:59:11 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/28 05:59:11 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/28 05:59:11 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/28 05:59:11 [INFO] serf: EventMemberLeave: testco.internal 127.0.0.1
2016/03/28 05:59:11 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/28 05:59:11 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/28 05:59:11 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/28 05:59:11 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/28 05:59:11 [INFO] serf: EventMemberLeave: testco.internal 127.0.0.1
2016/03/28 05:59:11 [INFO] consul: member 'testco.internal' left, deregistering
2016/03/28 05:59:11 [INFO] consul: shutting down client
2016/03/28 05:59:11 [INFO] consul: shutting down client
2016/03/28 05:59:11 [INFO] consul: shutting down server
2016/03/28 05:59:11 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:11 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:12 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestLeader_LeftMember (3.19s)
=== RUN   TestLeader_ReapMember
2016/03/28 05:59:12 [INFO] raft: Node at 127.0.0.1:15444 [Follower] entering Follower state
2016/03/28 05:59:12 [INFO] serf: EventMemberJoin: Node 15443 127.0.0.1
2016/03/28 05:59:12 [INFO] consul: adding LAN server Node 15443 (Addr: 127.0.0.1:15444) (DC: dc1)
2016/03/28 05:59:12 [INFO] serf: EventMemberJoin: Node 15443.dc1 127.0.0.1
2016/03/28 05:59:12 [INFO] consul: adding WAN server Node 15443.dc1 (Addr: 127.0.0.1:15444) (DC: dc1)
2016/03/28 05:59:12 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:59:12 [DEBUG] memberlist: TCP connection from=127.0.0.1:48366
2016/03/28 05:59:12 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15445
2016/03/28 05:59:12 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:59:12 [INFO] serf: EventMemberJoin: Node 15443 127.0.0.1
2016/03/28 05:59:12 [INFO] consul: adding server Node 15443 (Addr: 127.0.0.1:15444) (DC: dc1)
2016/03/28 05:59:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:12 [INFO] raft: Node at 127.0.0.1:15444 [Candidate] entering Candidate state
2016/03/28 05:59:12 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:12 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:12 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:12 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:13 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:13 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:13 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:13 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:13 [DEBUG] raft: Votes needed: 1
2016/03/28 05:59:13 [DEBUG] raft: Vote granted from 127.0.0.1:15444. Tally: 1
2016/03/28 05:59:13 [INFO] raft: Election won. Tally: 1
2016/03/28 05:59:13 [INFO] raft: Node at 127.0.0.1:15444 [Leader] entering Leader state
2016/03/28 05:59:13 [INFO] consul: cluster leadership acquired
2016/03/28 05:59:13 [INFO] consul: New leader elected: Node 15443
2016/03/28 05:59:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:13 [INFO] consul: New leader elected: Node 15443
2016/03/28 05:59:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:59:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:13 [DEBUG] raft: Node 127.0.0.1:15444 updated peer set (2): [127.0.0.1:15444]
2016/03/28 05:59:13 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:59:13 [INFO] consul: member 'Node 15443' joined, marking health alive
2016/03/28 05:59:13 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/03/28 05:59:14 [INFO] consul: member 'testco.internal' reaped, deregistering
2016/03/28 05:59:14 [INFO] consul: shutting down client
2016/03/28 05:59:14 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:14 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/28 05:59:14 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/28 05:59:14 [INFO] consul: shutting down server
2016/03/28 05:59:14 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:14 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/03/28 05:59:14 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/03/28 05:59:15 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:15 [INFO] consul: member 'testco.internal' failed, marking health critical
2016/03/28 05:59:15 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/03/28 05:59:15 [ERR] consul: failed to reconcile member: {testco.internal 127.0.0.1 15448 map[build: role:node dc:dc1 vsn:2 vsn_min:1 vsn_max:3] failed 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:59:15 [ERR] consul: failed to reconcile: leadership lost while committing log
--- PASS: TestLeader_ReapMember (3.09s)
=== RUN   TestLeader_Reconcile_ReapMember
2016/03/28 05:59:15 [INFO] raft: Node at 127.0.0.1:15450 [Follower] entering Follower state
2016/03/28 05:59:15 [INFO] serf: EventMemberJoin: Node 15449 127.0.0.1
2016/03/28 05:59:15 [INFO] consul: adding LAN server Node 15449 (Addr: 127.0.0.1:15450) (DC: dc1)
2016/03/28 05:59:15 [INFO] serf: EventMemberJoin: Node 15449.dc1 127.0.0.1
2016/03/28 05:59:15 [INFO] consul: adding WAN server Node 15449.dc1 (Addr: 127.0.0.1:15450) (DC: dc1)
2016/03/28 05:59:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:15 [INFO] raft: Node at 127.0.0.1:15450 [Candidate] entering Candidate state
2016/03/28 05:59:16 [DEBUG] raft: Votes needed: 1
2016/03/28 05:59:16 [DEBUG] raft: Vote granted from 127.0.0.1:15450. Tally: 1
2016/03/28 05:59:16 [INFO] raft: Election won. Tally: 1
2016/03/28 05:59:16 [INFO] raft: Node at 127.0.0.1:15450 [Leader] entering Leader state
2016/03/28 05:59:16 [INFO] consul: cluster leadership acquired
2016/03/28 05:59:16 [INFO] consul: New leader elected: Node 15449
2016/03/28 05:59:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:59:16 [DEBUG] raft: Node 127.0.0.1:15450 updated peer set (2): [127.0.0.1:15450]
2016/03/28 05:59:16 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:59:16 [INFO] consul: member 'Node 15449' joined, marking health alive
2016/03/28 05:59:17 [INFO] consul: member 'no-longer-around' reaped, deregistering
2016/03/28 05:59:17 [INFO] consul: member 'no-longer-around' reaped, deregistering
2016/03/28 05:59:17 [INFO] consul: shutting down server
2016/03/28 05:59:17 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:17 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:17 [ERR] consul.catalog: Deregister failed: leadership lost while committing log
2016/03/28 05:59:17 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 05:59:17 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestLeader_Reconcile_ReapMember (2.47s)
=== RUN   TestLeader_Reconcile
2016/03/28 05:59:18 [INFO] raft: Node at 127.0.0.1:15454 [Follower] entering Follower state
2016/03/28 05:59:18 [INFO] serf: EventMemberJoin: Node 15453 127.0.0.1
2016/03/28 05:59:18 [INFO] consul: adding LAN server Node 15453 (Addr: 127.0.0.1:15454) (DC: dc1)
2016/03/28 05:59:18 [INFO] serf: EventMemberJoin: Node 15453.dc1 127.0.0.1
2016/03/28 05:59:18 [INFO] consul: adding WAN server Node 15453.dc1 (Addr: 127.0.0.1:15454) (DC: dc1)
2016/03/28 05:59:18 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:59:18 [DEBUG] memberlist: TCP connection from=127.0.0.1:50289
2016/03/28 05:59:18 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15455
2016/03/28 05:59:18 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/28 05:59:18 [INFO] serf: EventMemberJoin: Node 15453 127.0.0.1
2016/03/28 05:59:18 [INFO] consul: adding server Node 15453 (Addr: 127.0.0.1:15454) (DC: dc1)
2016/03/28 05:59:18 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:18 [INFO] raft: Node at 127.0.0.1:15454 [Candidate] entering Candidate state
2016/03/28 05:59:18 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:18 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:18 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:18 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:18 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:18 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:18 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:18 [DEBUG] serf: messageJoinType: testco.internal
2016/03/28 05:59:18 [DEBUG] raft: Votes needed: 1
2016/03/28 05:59:18 [DEBUG] raft: Vote granted from 127.0.0.1:15454. Tally: 1
2016/03/28 05:59:18 [INFO] raft: Election won. Tally: 1
2016/03/28 05:59:18 [INFO] raft: Node at 127.0.0.1:15454 [Leader] entering Leader state
2016/03/28 05:59:18 [INFO] consul: cluster leadership acquired
2016/03/28 05:59:18 [INFO] consul: New leader elected: Node 15453
2016/03/28 05:59:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:18 [INFO] consul: New leader elected: Node 15453
2016/03/28 05:59:18 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:19 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:19 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:19 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:19 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:19 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:19 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:59:19 [DEBUG] raft: Node 127.0.0.1:15454 updated peer set (2): [127.0.0.1:15454]
2016/03/28 05:59:19 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:59:19 [INFO] consul: member 'Node 15453' joined, marking health alive
2016/03/28 05:59:19 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/03/28 05:59:20 [INFO] consul: shutting down client
2016/03/28 05:59:20 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:20 [INFO] consul: shutting down server
2016/03/28 05:59:20 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:20 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/28 05:59:20 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/28 05:59:20 [WARN] serf: Shutdown without a Leave
--- PASS: TestLeader_Reconcile (2.48s)
=== RUN   TestLeader_LeftServer
2016/03/28 05:59:20 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/03/28 05:59:20 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/03/28 05:59:20 [INFO] raft: Node at 127.0.0.1:15460 [Follower] entering Follower state
2016/03/28 05:59:20 [INFO] serf: EventMemberJoin: Node 15459 127.0.0.1
2016/03/28 05:59:20 [INFO] consul: adding LAN server Node 15459 (Addr: 127.0.0.1:15460) (DC: dc1)
2016/03/28 05:59:20 [INFO] serf: EventMemberJoin: Node 15459.dc1 127.0.0.1
2016/03/28 05:59:20 [INFO] consul: adding WAN server Node 15459.dc1 (Addr: 127.0.0.1:15460) (DC: dc1)
2016/03/28 05:59:20 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:20 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/28 05:59:21 [INFO] raft: Node at 127.0.0.1:15464 [Follower] entering Follower state
2016/03/28 05:59:21 [INFO] serf: EventMemberJoin: Node 15463 127.0.0.1
2016/03/28 05:59:21 [INFO] consul: adding LAN server Node 15463 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/03/28 05:59:21 [INFO] serf: EventMemberJoin: Node 15463.dc1 127.0.0.1
2016/03/28 05:59:21 [INFO] consul: adding WAN server Node 15463.dc1 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/03/28 05:59:21 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:59:22 [DEBUG] raft: Votes needed: 1
2016/03/28 05:59:22 [DEBUG] raft: Vote granted from 127.0.0.1:15460. Tally: 1
2016/03/28 05:59:22 [INFO] raft: Election won. Tally: 1
2016/03/28 05:59:22 [INFO] raft: Node at 127.0.0.1:15460 [Leader] entering Leader state
2016/03/28 05:59:22 [INFO] consul: cluster leadership acquired
2016/03/28 05:59:22 [INFO] consul: New leader elected: Node 15459
2016/03/28 05:59:22 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:59:22 [DEBUG] raft: Node 127.0.0.1:15460 updated peer set (2): [127.0.0.1:15460]
2016/03/28 05:59:22 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:59:22 [INFO] consul: member 'Node 15459' joined, marking health alive
2016/03/28 05:59:22 [INFO] raft: Node at 127.0.0.1:15468 [Follower] entering Follower state
2016/03/28 05:59:22 [INFO] serf: EventMemberJoin: Node 15467 127.0.0.1
2016/03/28 05:59:22 [INFO] consul: adding LAN server Node 15467 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/03/28 05:59:22 [INFO] serf: EventMemberJoin: Node 15467.dc1 127.0.0.1
2016/03/28 05:59:22 [INFO] consul: adding WAN server Node 15467.dc1 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/03/28 05:59:22 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15461
2016/03/28 05:59:22 [DEBUG] memberlist: TCP connection from=127.0.0.1:42166
2016/03/28 05:59:22 [INFO] serf: EventMemberJoin: Node 15463 127.0.0.1
2016/03/28 05:59:22 [INFO] consul: adding LAN server Node 15463 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/03/28 05:59:22 [INFO] serf: EventMemberJoin: Node 15459 127.0.0.1
2016/03/28 05:59:22 [INFO] consul: adding LAN server Node 15459 (Addr: 127.0.0.1:15460) (DC: dc1)
2016/03/28 05:59:22 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15461
2016/03/28 05:59:22 [DEBUG] memberlist: TCP connection from=127.0.0.1:42167
2016/03/28 05:59:22 [INFO] serf: EventMemberJoin: Node 15467 127.0.0.1
2016/03/28 05:59:22 [INFO] consul: adding LAN server Node 15467 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/03/28 05:59:22 [INFO] serf: EventMemberJoin: Node 15463 127.0.0.1
2016/03/28 05:59:22 [INFO] consul: adding LAN server Node 15463 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/03/28 05:59:22 [INFO] serf: EventMemberJoin: Node 15459 127.0.0.1
2016/03/28 05:59:22 [INFO] consul: adding LAN server Node 15459 (Addr: 127.0.0.1:15460) (DC: dc1)
2016/03/28 05:59:22 [DEBUG] raft: Node 127.0.0.1:15460 updated peer set (2): [127.0.0.1:15464 127.0.0.1:15460]
2016/03/28 05:59:22 [INFO] raft: Added peer 127.0.0.1:15464, starting replication
2016/03/28 05:59:22 [DEBUG] raft-net: 127.0.0.1:15464 accepted connection from: 127.0.0.1:53331
2016/03/28 05:59:22 [DEBUG] serf: messageJoinType: Node 15463
2016/03/28 05:59:22 [INFO] serf: EventMemberJoin: Node 15467 127.0.0.1
2016/03/28 05:59:22 [INFO] consul: adding LAN server Node 15467 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/03/28 05:59:22 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:22 [DEBUG] raft-net: 127.0.0.1:15464 accepted connection from: 127.0.0.1:53332
2016/03/28 05:59:22 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15467
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15467
2016/03/28 05:59:23 [DEBUG] memberlist: Potential blocking operation. Last command took 12.868333ms
2016/03/28 05:59:23 [DEBUG] memberlist: Potential blocking operation. Last command took 12.448333ms
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15467
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15463
2016/03/28 05:59:23 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15467
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15467
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15463
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15463
2016/03/28 05:59:23 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15463
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15467
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15463
2016/03/28 05:59:23 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15467
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15463
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15467
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15467
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15463
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15467
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15463
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15467
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15463
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15463
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15467
2016/03/28 05:59:23 [DEBUG] serf: messageJoinType: Node 15463
2016/03/28 05:59:23 [DEBUG] raft: Failed to contact 127.0.0.1:15464 in 482.790666ms
2016/03/28 05:59:23 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/28 05:59:23 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 05:59:23 [INFO] raft: Node at 127.0.0.1:15460 [Follower] entering Follower state
2016/03/28 05:59:23 [INFO] consul: cluster leadership lost
2016/03/28 05:59:23 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 05:59:23 [WARN] raft: AppendEntries to 127.0.0.1:15464 rejected, sending older logs (next: 1)
2016/03/28 05:59:23 [ERR] consul: failed to reconcile member: {Node 15463 127.0.0.1 15465 map[dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15464 role:consul] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:59:23 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/28 05:59:23 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:23 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/28 05:59:23 [DEBUG] raft-net: 127.0.0.1:15464 accepted connection from: 127.0.0.1:53333
2016/03/28 05:59:24 [DEBUG] raft: Node 127.0.0.1:15464 updated peer set (2): [127.0.0.1:15460]
2016/03/28 05:59:24 [INFO] raft: pipelining replication to peer 127.0.0.1:15464
2016/03/28 05:59:24 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15464
2016/03/28 05:59:24 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:24 [DEBUG] raft: Vote granted from 127.0.0.1:15460. Tally: 1
2016/03/28 05:59:24 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:24 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/28 05:59:25 [DEBUG] memberlist: Potential blocking operation. Last command took 14.263ms
2016/03/28 05:59:26 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:26 [DEBUG] raft: Vote granted from 127.0.0.1:15460. Tally: 1
2016/03/28 05:59:26 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:26 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/28 05:59:26 [DEBUG] memberlist: Potential blocking operation. Last command took 14.487666ms
2016/03/28 05:59:26 [DEBUG] memberlist: Potential blocking operation. Last command took 14.001ms
2016/03/28 05:59:26 [DEBUG] memberlist: Potential blocking operation. Last command took 10.249334ms
2016/03/28 05:59:27 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:27 [DEBUG] raft: Vote granted from 127.0.0.1:15460. Tally: 1
2016/03/28 05:59:27 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:27 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/28 05:59:27 [DEBUG] memberlist: Potential blocking operation. Last command took 10.496333ms
2016/03/28 05:59:27 [DEBUG] memberlist: Potential blocking operation. Last command took 15.862ms
2016/03/28 05:59:28 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:28 [DEBUG] raft: Vote granted from 127.0.0.1:15460. Tally: 1
2016/03/28 05:59:28 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:28 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/28 05:59:28 [DEBUG] memberlist: Potential blocking operation. Last command took 10.802334ms
2016/03/28 05:59:28 [DEBUG] memberlist: Potential blocking operation. Last command took 12.582ms
2016/03/28 05:59:29 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:29 [DEBUG] raft: Vote granted from 127.0.0.1:15460. Tally: 1
2016/03/28 05:59:29 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:29 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/28 05:59:29 [DEBUG] memberlist: Potential blocking operation. Last command took 13.338667ms
2016/03/28 05:59:30 [DEBUG] memberlist: Potential blocking operation. Last command took 23.806667ms
2016/03/28 05:59:30 [DEBUG] memberlist: Potential blocking operation. Last command took 12.013ms
2016/03/28 05:59:30 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:30 [DEBUG] raft: Vote granted from 127.0.0.1:15460. Tally: 1
2016/03/28 05:59:30 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:30 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/28 05:59:30 [DEBUG] memberlist: Potential blocking operation. Last command took 12.081ms
2016/03/28 05:59:31 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:31 [DEBUG] raft: Vote granted from 127.0.0.1:15460. Tally: 1
2016/03/28 05:59:31 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:31 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/28 05:59:32 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:32 [DEBUG] raft: Vote granted from 127.0.0.1:15460. Tally: 1
2016/03/28 05:59:32 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:32 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/28 05:59:33 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:33 [DEBUG] raft: Vote granted from 127.0.0.1:15460. Tally: 1
2016/03/28 05:59:33 [INFO] consul: shutting down server
2016/03/28 05:59:33 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:33 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:33 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/28 05:59:33 [DEBUG] memberlist: Failed UDP ping: Node 15467 (timeout reached)
2016/03/28 05:59:33 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:33 [INFO] memberlist: Suspect Node 15467 has failed, no acks received
2016/03/28 05:59:33 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:33 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/03/28 05:59:33 [DEBUG] memberlist: Failed UDP ping: Node 15467 (timeout reached)
2016/03/28 05:59:33 [INFO] consul: shutting down server
2016/03/28 05:59:33 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:33 [INFO] memberlist: Suspect Node 15467 has failed, no acks received
2016/03/28 05:59:33 [INFO] memberlist: Marking Node 15467 as failed, suspect timeout reached
2016/03/28 05:59:33 [INFO] serf: EventMemberFailed: Node 15467 127.0.0.1
2016/03/28 05:59:33 [INFO] memberlist: Marking Node 15467 as failed, suspect timeout reached
2016/03/28 05:59:33 [INFO] serf: EventMemberFailed: Node 15467 127.0.0.1
2016/03/28 05:59:33 [INFO] consul: removing LAN server Node 15467 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/03/28 05:59:33 [DEBUG] memberlist: Failed UDP ping: Node 15467 (timeout reached)
2016/03/28 05:59:34 [INFO] memberlist: Suspect Node 15467 has failed, no acks received
2016/03/28 05:59:34 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:34 [DEBUG] raft-net: 127.0.0.1:15460 accepted connection from: 127.0.0.1:36097
2016/03/28 05:59:34 [DEBUG] memberlist: Failed UDP ping: Node 15463 (timeout reached)
2016/03/28 05:59:34 [INFO] memberlist: Suspect Node 15463 has failed, no acks received
2016/03/28 05:59:34 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:59:34 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15464: EOF
2016/03/28 05:59:34 [DEBUG] memberlist: Failed UDP ping: Node 15463 (timeout reached)
2016/03/28 05:59:34 [INFO] memberlist: Suspect Node 15463 has failed, no acks received
2016/03/28 05:59:34 [INFO] memberlist: Marking Node 15463 as failed, suspect timeout reached
2016/03/28 05:59:34 [INFO] serf: EventMemberFailed: Node 15463 127.0.0.1
2016/03/28 05:59:34 [INFO] consul: removing LAN server Node 15463 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/03/28 05:59:34 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:34 [DEBUG] memberlist: Failed UDP ping: Node 15463 (timeout reached)
2016/03/28 05:59:34 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 05:59:34 [DEBUG] raft: Vote granted from 127.0.0.1:15460. Tally: 1
2016/03/28 05:59:34 [INFO] memberlist: Suspect Node 15463 has failed, no acks received
2016/03/28 05:59:34 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:34 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/28 05:59:34 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:34 [INFO] consul: shutting down server
2016/03/28 05:59:34 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:34 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:34 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:59:34 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15464: EOF
2016/03/28 05:59:35 [DEBUG] raft: Votes needed: 2
--- FAIL: TestLeader_LeftServer (14.78s)
	leader_test.go:347: should have 3 peers
=== RUN   TestLeader_LeftLeader
2016/03/28 05:59:35 [INFO] raft: Node at 127.0.0.1:15472 [Follower] entering Follower state
2016/03/28 05:59:35 [INFO] serf: EventMemberJoin: Node 15471 127.0.0.1
2016/03/28 05:59:35 [INFO] consul: adding LAN server Node 15471 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/03/28 05:59:35 [INFO] serf: EventMemberJoin: Node 15471.dc1 127.0.0.1
2016/03/28 05:59:35 [INFO] consul: adding WAN server Node 15471.dc1 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/03/28 05:59:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:35 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:36 [INFO] raft: Node at 127.0.0.1:15476 [Follower] entering Follower state
2016/03/28 05:59:36 [INFO] serf: EventMemberJoin: Node 15475 127.0.0.1
2016/03/28 05:59:36 [INFO] consul: adding LAN server Node 15475 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/03/28 05:59:36 [INFO] serf: EventMemberJoin: Node 15475.dc1 127.0.0.1
2016/03/28 05:59:36 [INFO] consul: adding WAN server Node 15475.dc1 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/03/28 05:59:36 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:59:36 [DEBUG] raft: Votes needed: 1
2016/03/28 05:59:36 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:36 [INFO] raft: Election won. Tally: 1
2016/03/28 05:59:36 [INFO] raft: Node at 127.0.0.1:15472 [Leader] entering Leader state
2016/03/28 05:59:36 [INFO] consul: cluster leadership acquired
2016/03/28 05:59:36 [INFO] consul: New leader elected: Node 15471
2016/03/28 05:59:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:59:37 [DEBUG] raft: Node 127.0.0.1:15472 updated peer set (2): [127.0.0.1:15472]
2016/03/28 05:59:37 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:59:37 [INFO] consul: member 'Node 15471' joined, marking health alive
2016/03/28 05:59:37 [INFO] raft: Node at 127.0.0.1:15480 [Follower] entering Follower state
2016/03/28 05:59:37 [INFO] serf: EventMemberJoin: Node 15479 127.0.0.1
2016/03/28 05:59:37 [INFO] consul: adding LAN server Node 15479 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/03/28 05:59:37 [INFO] serf: EventMemberJoin: Node 15479.dc1 127.0.0.1
2016/03/28 05:59:37 [INFO] consul: adding WAN server Node 15479.dc1 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/03/28 05:59:37 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15473
2016/03/28 05:59:37 [DEBUG] memberlist: TCP connection from=127.0.0.1:42757
2016/03/28 05:59:37 [INFO] serf: EventMemberJoin: Node 15475 127.0.0.1
2016/03/28 05:59:37 [INFO] serf: EventMemberJoin: Node 15471 127.0.0.1
2016/03/28 05:59:37 [INFO] consul: adding LAN server Node 15475 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/03/28 05:59:37 [INFO] consul: adding LAN server Node 15471 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/03/28 05:59:37 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15473
2016/03/28 05:59:37 [DEBUG] memberlist: TCP connection from=127.0.0.1:42758
2016/03/28 05:59:37 [INFO] serf: EventMemberJoin: Node 15479 127.0.0.1
2016/03/28 05:59:37 [INFO] consul: adding LAN server Node 15479 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/03/28 05:59:37 [INFO] serf: EventMemberJoin: Node 15475 127.0.0.1
2016/03/28 05:59:37 [INFO] consul: adding LAN server Node 15475 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/03/28 05:59:37 [INFO] serf: EventMemberJoin: Node 15471 127.0.0.1
2016/03/28 05:59:37 [INFO] consul: adding LAN server Node 15471 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/03/28 05:59:38 [INFO] serf: EventMemberJoin: Node 15479 127.0.0.1
2016/03/28 05:59:38 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15475
2016/03/28 05:59:38 [INFO] consul: adding LAN server Node 15479 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/03/28 05:59:38 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:59:38 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:38 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15475
2016/03/28 05:59:38 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15479
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15475
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15479
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15479
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15475
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15479
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15475
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15479
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15475
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15479
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15475
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15479
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15479
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15475
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15479
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15479
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15475
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15479
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15475
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15475
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15475
2016/03/28 05:59:38 [DEBUG] serf: messageJoinType: Node 15479
2016/03/28 05:59:38 [DEBUG] raft: Node 127.0.0.1:15472 updated peer set (2): [127.0.0.1:15476 127.0.0.1:15472]
2016/03/28 05:59:38 [INFO] raft: Added peer 127.0.0.1:15476, starting replication
2016/03/28 05:59:38 [DEBUG] raft-net: 127.0.0.1:15476 accepted connection from: 127.0.0.1:33916
2016/03/28 05:59:38 [DEBUG] raft-net: 127.0.0.1:15476 accepted connection from: 127.0.0.1:33917
2016/03/28 05:59:38 [DEBUG] raft: Failed to contact 127.0.0.1:15476 in 243.203667ms
2016/03/28 05:59:38 [WARN] raft: Failed to get previous log: 4 log not found (last: 0)
2016/03/28 05:59:38 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 05:59:38 [INFO] raft: Node at 127.0.0.1:15472 [Follower] entering Follower state
2016/03/28 05:59:38 [INFO] consul: cluster leadership lost
2016/03/28 05:59:38 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 05:59:38 [ERR] consul: failed to reconcile member: {Node 15475 127.0.0.1 15477 map[vsn:2 vsn_min:1 vsn_max:3 build: port:15476 role:consul dc:dc1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:59:38 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 05:59:38 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/28 05:59:38 [WARN] raft: AppendEntries to 127.0.0.1:15476 rejected, sending older logs (next: 1)
2016/03/28 05:59:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:38 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:38 [DEBUG] memberlist: Potential blocking operation. Last command took 21.411666ms
2016/03/28 05:59:38 [DEBUG] raft-net: 127.0.0.1:15476 accepted connection from: 127.0.0.1:33918
2016/03/28 05:59:39 [DEBUG] memberlist: Potential blocking operation. Last command took 10.507ms
2016/03/28 05:59:39 [DEBUG] memberlist: Potential blocking operation. Last command took 13.815ms
2016/03/28 05:59:39 [DEBUG] raft: Node 127.0.0.1:15476 updated peer set (2): [127.0.0.1:15472]
2016/03/28 05:59:39 [INFO] raft: pipelining replication to peer 127.0.0.1:15476
2016/03/28 05:59:39 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15476
2016/03/28 05:59:40 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:40 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:40 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:40 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:41 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:41 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:41 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:41 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:41 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:41 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:41 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:41 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:42 [DEBUG] memberlist: Potential blocking operation. Last command took 10.626667ms
2016/03/28 05:59:42 [DEBUG] memberlist: Potential blocking operation. Last command took 13.052333ms
2016/03/28 05:59:42 [DEBUG] memberlist: Potential blocking operation. Last command took 22.92ms
2016/03/28 05:59:42 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:42 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:42 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:42 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:43 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:43 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:43 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:43 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:43 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:43 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:44 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:44 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:44 [DEBUG] memberlist: Potential blocking operation. Last command took 12.001333ms
2016/03/28 05:59:44 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:44 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:44 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:44 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:45 [DEBUG] memberlist: Potential blocking operation. Last command took 10.037333ms
2016/03/28 05:59:45 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:45 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:45 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:45 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:46 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:46 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:46 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:46 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:46 [DEBUG] memberlist: Potential blocking operation. Last command took 12.241666ms
2016/03/28 05:59:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:46 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/03/28 05:59:46 [DEBUG] memberlist: Potential blocking operation. Last command took 11.116667ms
2016/03/28 05:59:46 [DEBUG] raft-net: 127.0.0.1:15472 accepted connection from: 127.0.0.1:35017
2016/03/28 05:59:46 [DEBUG] memberlist: Potential blocking operation. Last command took 11.249667ms
2016/03/28 05:59:47 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:47 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:47 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 05:59:47 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:47 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:47 [DEBUG] memberlist: Potential blocking operation. Last command took 10.914ms
2016/03/28 05:59:47 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:47 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 05:59:47 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/03/28 05:59:47 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:47 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/03/28 05:59:48 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:48 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:48 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 05:59:48 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:48 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:48 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:48 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 05:59:48 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/03/28 05:59:48 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:48 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/03/28 05:59:48 [INFO] consul: shutting down server
2016/03/28 05:59:48 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:48 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:48 [DEBUG] memberlist: Failed UDP ping: Node 15479 (timeout reached)
2016/03/28 05:59:48 [INFO] memberlist: Suspect Node 15479 has failed, no acks received
2016/03/28 05:59:48 [DEBUG] memberlist: Failed UDP ping: Node 15479 (timeout reached)
2016/03/28 05:59:48 [INFO] memberlist: Suspect Node 15479 has failed, no acks received
2016/03/28 05:59:48 [INFO] consul: shutting down server
2016/03/28 05:59:48 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:49 [INFO] memberlist: Marking Node 15479 as failed, suspect timeout reached
2016/03/28 05:59:49 [INFO] serf: EventMemberFailed: Node 15479 127.0.0.1
2016/03/28 05:59:49 [INFO] consul: removing LAN server Node 15479 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/03/28 05:59:49 [DEBUG] memberlist: Failed UDP ping: Node 15475 (timeout reached)
2016/03/28 05:59:49 [INFO] memberlist: Marking Node 15479 as failed, suspect timeout reached
2016/03/28 05:59:49 [INFO] serf: EventMemberFailed: Node 15479 127.0.0.1
2016/03/28 05:59:49 [INFO] memberlist: Suspect Node 15475 has failed, no acks received
2016/03/28 05:59:49 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:49 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:49 [DEBUG] raft: Vote granted from 127.0.0.1:15472. Tally: 1
2016/03/28 05:59:49 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/28 05:59:49 [DEBUG] memberlist: Failed UDP ping: Node 15475 (timeout reached)
2016/03/28 05:59:49 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:49 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/28 05:59:49 [INFO] memberlist: Suspect Node 15475 has failed, no acks received
2016/03/28 05:59:49 [INFO] memberlist: Marking Node 15475 as failed, suspect timeout reached
2016/03/28 05:59:49 [INFO] serf: EventMemberFailed: Node 15475 127.0.0.1
2016/03/28 05:59:49 [INFO] consul: removing LAN server Node 15475 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/03/28 05:59:49 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:59:49 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:49 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15476: EOF
2016/03/28 05:59:49 [INFO] consul: shutting down server
2016/03/28 05:59:49 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:49 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:49 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 05:59:49 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15476: EOF
2016/03/28 05:59:50 [DEBUG] raft: Votes needed: 2
--- FAIL: TestLeader_LeftLeader (14.95s)
	leader_test.go:400: should have 3 peers
=== RUN   TestLeader_MultiBootstrap
2016/03/28 05:59:50 [INFO] raft: Node at 127.0.0.1:15484 [Follower] entering Follower state
2016/03/28 05:59:50 [INFO] serf: EventMemberJoin: Node 15483 127.0.0.1
2016/03/28 05:59:50 [INFO] consul: adding LAN server Node 15483 (Addr: 127.0.0.1:15484) (DC: dc1)
2016/03/28 05:59:50 [INFO] serf: EventMemberJoin: Node 15483.dc1 127.0.0.1
2016/03/28 05:59:50 [INFO] consul: adding WAN server Node 15483.dc1 (Addr: 127.0.0.1:15484) (DC: dc1)
2016/03/28 05:59:50 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:50 [INFO] raft: Node at 127.0.0.1:15484 [Candidate] entering Candidate state
2016/03/28 05:59:51 [DEBUG] raft: Votes needed: 1
2016/03/28 05:59:51 [DEBUG] raft: Vote granted from 127.0.0.1:15484. Tally: 1
2016/03/28 05:59:51 [INFO] raft: Election won. Tally: 1
2016/03/28 05:59:51 [INFO] raft: Node at 127.0.0.1:15484 [Leader] entering Leader state
2016/03/28 05:59:51 [INFO] raft: Node at 127.0.0.1:15488 [Follower] entering Follower state
2016/03/28 05:59:51 [INFO] consul: cluster leadership acquired
2016/03/28 05:59:51 [INFO] consul: New leader elected: Node 15483
2016/03/28 05:59:51 [INFO] serf: EventMemberJoin: Node 15487 127.0.0.1
2016/03/28 05:59:51 [INFO] consul: adding LAN server Node 15487 (Addr: 127.0.0.1:15488) (DC: dc1)
2016/03/28 05:59:51 [INFO] serf: EventMemberJoin: Node 15487.dc1 127.0.0.1
2016/03/28 05:59:51 [INFO] consul: adding WAN server Node 15487.dc1 (Addr: 127.0.0.1:15488) (DC: dc1)
2016/03/28 05:59:51 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15485
2016/03/28 05:59:51 [DEBUG] memberlist: TCP connection from=127.0.0.1:53522
2016/03/28 05:59:51 [INFO] serf: EventMemberJoin: Node 15487 127.0.0.1
2016/03/28 05:59:51 [INFO] consul: adding LAN server Node 15487 (Addr: 127.0.0.1:15488) (DC: dc1)
2016/03/28 05:59:51 [INFO] serf: EventMemberJoin: Node 15483 127.0.0.1
2016/03/28 05:59:51 [INFO] consul: adding LAN server Node 15483 (Addr: 127.0.0.1:15484) (DC: dc1)
2016/03/28 05:59:51 [INFO] consul: shutting down server
2016/03/28 05:59:51 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:51 [INFO] raft: Node at 127.0.0.1:15488 [Candidate] entering Candidate state
2016/03/28 05:59:51 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:51 [DEBUG] memberlist: Failed UDP ping: Node 15487 (timeout reached)
2016/03/28 05:59:51 [INFO] memberlist: Suspect Node 15487 has failed, no acks received
2016/03/28 05:59:51 [DEBUG] memberlist: Failed UDP ping: Node 15487 (timeout reached)
2016/03/28 05:59:51 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:59:51 [INFO] memberlist: Suspect Node 15487 has failed, no acks received
2016/03/28 05:59:51 [INFO] memberlist: Marking Node 15487 as failed, suspect timeout reached
2016/03/28 05:59:51 [INFO] serf: EventMemberFailed: Node 15487 127.0.0.1
2016/03/28 05:59:51 [INFO] consul: removing LAN server Node 15487 (Addr: 127.0.0.1:15488) (DC: dc1)
2016/03/28 05:59:51 [DEBUG] memberlist: Failed UDP ping: Node 15487 (timeout reached)
2016/03/28 05:59:52 [INFO] memberlist: Suspect Node 15487 has failed, no acks received
2016/03/28 05:59:52 [DEBUG] raft: Node 127.0.0.1:15484 updated peer set (2): [127.0.0.1:15484]
2016/03/28 05:59:52 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:59:52 [INFO] consul: member 'Node 15483' joined, marking health alive
2016/03/28 05:59:52 [DEBUG] raft: Votes needed: 1
2016/03/28 05:59:52 [INFO] consul: member 'Node 15487' failed, marking health critical
2016/03/28 05:59:52 [INFO] consul: shutting down server
2016/03/28 05:59:52 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:52 [WARN] serf: Shutdown without a Leave
2016/03/28 05:59:52 [ERR] consul.catalog: Register failed: raft is already shutdown
2016/03/28 05:59:52 [ERR] consul: failed to reconcile member: {Node 15487 127.0.0.1 15489 map[build: port:15488 bootstrap:1 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3] failed 1 3 2 2 4 4}: raft is already shutdown
2016/03/28 05:59:52 [ERR] consul: failed to reconcile: raft is already shutdown
2016/03/28 05:59:52 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestLeader_MultiBootstrap (2.37s)
=== RUN   TestLeader_TombstoneGC_Reset
2016/03/28 05:59:52 [INFO] raft: Node at 127.0.0.1:15492 [Follower] entering Follower state
2016/03/28 05:59:52 [INFO] serf: EventMemberJoin: Node 15491 127.0.0.1
2016/03/28 05:59:52 [INFO] serf: EventMemberJoin: Node 15491.dc1 127.0.0.1
2016/03/28 05:59:52 [INFO] consul: adding WAN server Node 15491.dc1 (Addr: 127.0.0.1:15492) (DC: dc1)
2016/03/28 05:59:52 [INFO] consul: adding LAN server Node 15491 (Addr: 127.0.0.1:15492) (DC: dc1)
2016/03/28 05:59:53 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:53 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 05:59:53 [INFO] raft: Node at 127.0.0.1:15496 [Follower] entering Follower state
2016/03/28 05:59:53 [INFO] serf: EventMemberJoin: Node 15495 127.0.0.1
2016/03/28 05:59:53 [INFO] consul: adding LAN server Node 15495 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/03/28 05:59:53 [INFO] serf: EventMemberJoin: Node 15495.dc1 127.0.0.1
2016/03/28 05:59:53 [INFO] consul: adding WAN server Node 15495.dc1 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/03/28 05:59:53 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:59:53 [DEBUG] raft: Votes needed: 1
2016/03/28 05:59:53 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 05:59:53 [INFO] raft: Election won. Tally: 1
2016/03/28 05:59:53 [INFO] raft: Node at 127.0.0.1:15492 [Leader] entering Leader state
2016/03/28 05:59:53 [INFO] consul: cluster leadership acquired
2016/03/28 05:59:53 [INFO] consul: New leader elected: Node 15491
2016/03/28 05:59:54 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 05:59:54 [DEBUG] raft: Node 127.0.0.1:15492 updated peer set (2): [127.0.0.1:15492]
2016/03/28 05:59:54 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 05:59:54 [INFO] consul: member 'Node 15491' joined, marking health alive
2016/03/28 05:59:54 [INFO] raft: Node at 127.0.0.1:15500 [Follower] entering Follower state
2016/03/28 05:59:54 [INFO] serf: EventMemberJoin: Node 15499 127.0.0.1
2016/03/28 05:59:54 [INFO] serf: EventMemberJoin: Node 15499.dc1 127.0.0.1
2016/03/28 05:59:54 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15493
2016/03/28 05:59:54 [INFO] consul: adding LAN server Node 15499 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/03/28 05:59:54 [INFO] consul: adding WAN server Node 15499.dc1 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/03/28 05:59:54 [DEBUG] memberlist: TCP connection from=127.0.0.1:37477
2016/03/28 05:59:54 [INFO] serf: EventMemberJoin: Node 15495 127.0.0.1
2016/03/28 05:59:54 [INFO] serf: EventMemberJoin: Node 15491 127.0.0.1
2016/03/28 05:59:54 [INFO] consul: adding LAN server Node 15495 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/03/28 05:59:54 [INFO] consul: adding LAN server Node 15491 (Addr: 127.0.0.1:15492) (DC: dc1)
2016/03/28 05:59:54 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15493
2016/03/28 05:59:54 [DEBUG] memberlist: TCP connection from=127.0.0.1:37478
2016/03/28 05:59:54 [INFO] serf: EventMemberJoin: Node 15499 127.0.0.1
2016/03/28 05:59:54 [INFO] consul: adding LAN server Node 15499 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/03/28 05:59:54 [INFO] serf: EventMemberJoin: Node 15495 127.0.0.1
2016/03/28 05:59:54 [INFO] consul: adding LAN server Node 15495 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/03/28 05:59:54 [INFO] serf: EventMemberJoin: Node 15491 127.0.0.1
2016/03/28 05:59:54 [INFO] consul: adding LAN server Node 15491 (Addr: 127.0.0.1:15492) (DC: dc1)
2016/03/28 05:59:54 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 05:59:54 [INFO] serf: EventMemberJoin: Node 15499 127.0.0.1
2016/03/28 05:59:54 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:54 [INFO] consul: adding LAN server Node 15499 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15499
2016/03/28 05:59:54 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:54 [DEBUG] raft: Node 127.0.0.1:15492 updated peer set (2): [127.0.0.1:15496 127.0.0.1:15492]
2016/03/28 05:59:54 [INFO] raft: Added peer 127.0.0.1:15496, starting replication
2016/03/28 05:59:54 [DEBUG] raft-net: 127.0.0.1:15496 accepted connection from: 127.0.0.1:56752
2016/03/28 05:59:54 [DEBUG] raft-net: 127.0.0.1:15496 accepted connection from: 127.0.0.1:56753
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15495
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15495
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15499
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15495
2016/03/28 05:59:54 [DEBUG] memberlist: Potential blocking operation. Last command took 13.572333ms
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15499
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15499
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15495
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15499
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15499
2016/03/28 05:59:54 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:54 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15495
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15499
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15495
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15499
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15495
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15499
2016/03/28 05:59:54 [DEBUG] serf: messageJoinType: Node 15495
2016/03/28 05:59:55 [DEBUG] serf: messageJoinType: Node 15499
2016/03/28 05:59:55 [DEBUG] serf: messageJoinType: Node 15495
2016/03/28 05:59:55 [DEBUG] serf: messageJoinType: Node 15495
2016/03/28 05:59:55 [DEBUG] serf: messageJoinType: Node 15495
2016/03/28 05:59:55 [DEBUG] serf: messageJoinType: Node 15499
2016/03/28 05:59:55 [DEBUG] serf: messageJoinType: Node 15495
2016/03/28 05:59:55 [DEBUG] serf: messageJoinType: Node 15499
2016/03/28 05:59:55 [DEBUG] raft: Failed to contact 127.0.0.1:15496 in 209.285ms
2016/03/28 05:59:55 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 05:59:55 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/28 05:59:55 [INFO] raft: Node at 127.0.0.1:15492 [Follower] entering Follower state
2016/03/28 05:59:55 [INFO] consul: cluster leadership lost
2016/03/28 05:59:55 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 05:59:55 [WARN] raft: AppendEntries to 127.0.0.1:15496 rejected, sending older logs (next: 1)
2016/03/28 05:59:55 [ERR] consul: failed to reconcile member: {Node 15495 127.0.0.1 15497 map[build: port:15496 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 05:59:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 05:59:55 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 05:59:55 [DEBUG] raft-net: 127.0.0.1:15496 accepted connection from: 127.0.0.1:56754
2016/03/28 05:59:55 [DEBUG] raft: Node 127.0.0.1:15496 updated peer set (2): [127.0.0.1:15492]
2016/03/28 05:59:55 [INFO] raft: pipelining replication to peer 127.0.0.1:15496
2016/03/28 05:59:55 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15496
2016/03/28 05:59:55 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:55 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 05:59:55 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:55 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 05:59:55 [DEBUG] memberlist: Potential blocking operation. Last command took 13.633ms
2016/03/28 05:59:56 [DEBUG] memberlist: Potential blocking operation. Last command took 13.700333ms
2016/03/28 05:59:56 [DEBUG] memberlist: Potential blocking operation. Last command took 17.342ms
2016/03/28 05:59:57 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:57 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 05:59:57 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:57 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 05:59:58 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:58 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 05:59:58 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:58 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 05:59:58 [DEBUG] memberlist: Potential blocking operation. Last command took 10.091333ms
2016/03/28 05:59:58 [DEBUG] memberlist: Potential blocking operation. Last command took 11.372ms
2016/03/28 05:59:59 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:59 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 05:59:59 [WARN] raft: Election timeout reached, restarting election
2016/03/28 05:59:59 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 05:59:59 [DEBUG] memberlist: Potential blocking operation. Last command took 11.040667ms
2016/03/28 05:59:59 [DEBUG] raft: Votes needed: 2
2016/03/28 05:59:59 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 06:00:00 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:00:00 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 06:00:00 [DEBUG] memberlist: Potential blocking operation. Last command took 11.184ms
2016/03/28 06:00:00 [DEBUG] raft: Votes needed: 2
2016/03/28 06:00:00 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 06:00:00 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:00:00 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 06:00:01 [DEBUG] raft: Votes needed: 2
2016/03/28 06:00:01 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 06:00:01 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:00:01 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 06:00:01 [DEBUG] memberlist: Potential blocking operation. Last command took 10.634ms
2016/03/28 06:00:02 [DEBUG] raft: Votes needed: 2
2016/03/28 06:00:02 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 06:00:02 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:00:02 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 06:00:02 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:00:02 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/03/28 06:00:02 [DEBUG] raft-net: 127.0.0.1:15492 accepted connection from: 127.0.0.1:34462
2016/03/28 06:00:02 [DEBUG] memberlist: Potential blocking operation. Last command took 11.451ms
2016/03/28 06:00:03 [DEBUG] raft: Votes needed: 2
2016/03/28 06:00:03 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 06:00:03 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/28 06:00:03 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:00:03 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 06:00:03 [DEBUG] raft: Votes needed: 2
2016/03/28 06:00:03 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/28 06:00:03 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/03/28 06:00:03 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:00:03 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/03/28 06:00:03 [DEBUG] memberlist: Potential blocking operation. Last command took 12.404333ms
2016/03/28 06:00:03 [DEBUG] memberlist: Potential blocking operation. Last command took 14.137ms
2016/03/28 06:00:04 [DEBUG] raft: Votes needed: 2
2016/03/28 06:00:04 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 06:00:04 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 06:00:04 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:00:04 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 06:00:04 [DEBUG] raft: Votes needed: 2
2016/03/28 06:00:04 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 06:00:04 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/03/28 06:00:04 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:00:04 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/03/28 06:00:05 [DEBUG] raft: Votes needed: 2
2016/03/28 06:00:05 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 06:00:05 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 06:00:05 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:00:05 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 06:00:05 [DEBUG] raft: Votes needed: 2
2016/03/28 06:00:05 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 06:00:05 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/03/28 06:00:05 [INFO] consul: shutting down server
2016/03/28 06:00:05 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:05 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:00:05 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/03/28 06:00:05 [DEBUG] memberlist: Failed UDP ping: Node 15499 (timeout reached)
2016/03/28 06:00:05 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:05 [INFO] memberlist: Suspect Node 15499 has failed, no acks received
2016/03/28 06:00:05 [INFO] consul: shutting down server
2016/03/28 06:00:05 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:05 [DEBUG] memberlist: Failed UDP ping: Node 15499 (timeout reached)
2016/03/28 06:00:05 [ERR] memberlist: Failed to send indirect ping: write udp 127.0.0.1:15497->127.0.0.1:15493: use of closed network connection
2016/03/28 06:00:05 [INFO] memberlist: Suspect Node 15499 has failed, no acks received
2016/03/28 06:00:05 [DEBUG] memberlist: Failed UDP ping: Node 15499 (timeout reached)
2016/03/28 06:00:05 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:05 [INFO] memberlist: Suspect Node 15499 has failed, no acks received
2016/03/28 06:00:05 [INFO] memberlist: Marking Node 15499 as failed, suspect timeout reached
2016/03/28 06:00:05 [INFO] serf: EventMemberFailed: Node 15499 127.0.0.1
2016/03/28 06:00:05 [DEBUG] memberlist: Failed UDP ping: Node 15495 (timeout reached)
2016/03/28 06:00:05 [INFO] memberlist: Suspect Node 15495 has failed, no acks received
2016/03/28 06:00:05 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:00:05 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15496: EOF
2016/03/28 06:00:06 [DEBUG] memberlist: Failed UDP ping: Node 15499 (timeout reached)
2016/03/28 06:00:06 [DEBUG] raft: Votes needed: 2
2016/03/28 06:00:06 [DEBUG] raft: Vote granted from 127.0.0.1:15492. Tally: 1
2016/03/28 06:00:06 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/28 06:00:06 [INFO] memberlist: Marking Node 15499 as failed, suspect timeout reached
2016/03/28 06:00:06 [INFO] serf: EventMemberFailed: Node 15499 127.0.0.1
2016/03/28 06:00:06 [INFO] memberlist: Suspect Node 15499 has failed, no acks received
2016/03/28 06:00:06 [INFO] consul: removing LAN server Node 15499 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/03/28 06:00:06 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:00:06 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/28 06:00:06 [INFO] memberlist: Marking Node 15495 as failed, suspect timeout reached
2016/03/28 06:00:06 [INFO] serf: EventMemberFailed: Node 15495 127.0.0.1
2016/03/28 06:00:06 [INFO] consul: removing LAN server Node 15495 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/03/28 06:00:06 [DEBUG] memberlist: Failed UDP ping: Node 15495 (timeout reached)
2016/03/28 06:00:06 [INFO] memberlist: Suspect Node 15495 has failed, no acks received
2016/03/28 06:00:06 [DEBUG] raft: Votes needed: 2
2016/03/28 06:00:06 [INFO] consul: shutting down server
2016/03/28 06:00:06 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:06 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:00:06 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15496: EOF
2016/03/28 06:00:06 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:07 [DEBUG] raft: Votes needed: 2
--- FAIL: TestLeader_TombstoneGC_Reset (14.95s)
	leader_test.go:511: should have 3 peers
=== RUN   TestLeader_ReapTombstones
2016/03/28 06:00:07 [INFO] raft: Node at 127.0.0.1:15504 [Follower] entering Follower state
2016/03/28 06:00:07 [INFO] serf: EventMemberJoin: Node 15503 127.0.0.1
2016/03/28 06:00:07 [INFO] consul: adding LAN server Node 15503 (Addr: 127.0.0.1:15504) (DC: dc1)
2016/03/28 06:00:07 [INFO] serf: EventMemberJoin: Node 15503.dc1 127.0.0.1
2016/03/28 06:00:07 [INFO] consul: adding WAN server Node 15503.dc1 (Addr: 127.0.0.1:15504) (DC: dc1)
2016/03/28 06:00:08 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:00:08 [INFO] raft: Node at 127.0.0.1:15504 [Candidate] entering Candidate state
2016/03/28 06:00:09 [DEBUG] raft: Votes needed: 1
2016/03/28 06:00:09 [DEBUG] raft: Vote granted from 127.0.0.1:15504. Tally: 1
2016/03/28 06:00:09 [INFO] raft: Election won. Tally: 1
2016/03/28 06:00:09 [INFO] raft: Node at 127.0.0.1:15504 [Leader] entering Leader state
2016/03/28 06:00:09 [INFO] consul: cluster leadership acquired
2016/03/28 06:00:09 [INFO] consul: New leader elected: Node 15503
2016/03/28 06:00:09 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:00:09 [DEBUG] raft: Node 127.0.0.1:15504 updated peer set (2): [127.0.0.1:15504]
2016/03/28 06:00:09 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:00:09 [INFO] consul: member 'Node 15503' joined, marking health alive
2016/03/28 06:00:11 [INFO] consul: shutting down server
2016/03/28 06:00:11 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:11 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:11 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 06:00:11 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestLeader_ReapTombstones (4.25s)
=== RUN   TestPreparedQuery_Apply
2016/03/28 06:00:12 [INFO] raft: Node at 127.0.0.1:15508 [Follower] entering Follower state
2016/03/28 06:00:12 [INFO] serf: EventMemberJoin: Node 15507 127.0.0.1
2016/03/28 06:00:12 [INFO] consul: adding LAN server Node 15507 (Addr: 127.0.0.1:15508) (DC: dc1)
2016/03/28 06:00:12 [INFO] serf: EventMemberJoin: Node 15507.dc1 127.0.0.1
2016/03/28 06:00:12 [INFO] consul: adding WAN server Node 15507.dc1 (Addr: 127.0.0.1:15508) (DC: dc1)
2016/03/28 06:00:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:00:12 [INFO] raft: Node at 127.0.0.1:15508 [Candidate] entering Candidate state
2016/03/28 06:00:12 [DEBUG] raft: Votes needed: 1
2016/03/28 06:00:12 [DEBUG] raft: Vote granted from 127.0.0.1:15508. Tally: 1
2016/03/28 06:00:12 [INFO] raft: Election won. Tally: 1
2016/03/28 06:00:12 [INFO] raft: Node at 127.0.0.1:15508 [Leader] entering Leader state
2016/03/28 06:00:12 [INFO] consul: cluster leadership acquired
2016/03/28 06:00:12 [INFO] consul: New leader elected: Node 15507
2016/03/28 06:00:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:00:13 [DEBUG] raft: Node 127.0.0.1:15508 updated peer set (2): [127.0.0.1:15508]
2016/03/28 06:00:13 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:00:13 [INFO] consul: member 'Node 15507' joined, marking health alive
2016/03/28 06:00:16 [INFO] consul: shutting down server
2016/03/28 06:00:16 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:16 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:16 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 06:00:16 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestPreparedQuery_Apply (4.91s)
=== RUN   TestPreparedQuery_Apply_ACLDeny
2016/03/28 06:00:17 [INFO] raft: Node at 127.0.0.1:15512 [Follower] entering Follower state
2016/03/28 06:00:17 [INFO] serf: EventMemberJoin: Node 15511 127.0.0.1
2016/03/28 06:00:17 [INFO] consul: adding LAN server Node 15511 (Addr: 127.0.0.1:15512) (DC: dc1)
2016/03/28 06:00:17 [INFO] serf: EventMemberJoin: Node 15511.dc1 127.0.0.1
2016/03/28 06:00:17 [INFO] consul: adding WAN server Node 15511.dc1 (Addr: 127.0.0.1:15512) (DC: dc1)
2016/03/28 06:00:17 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:00:17 [INFO] raft: Node at 127.0.0.1:15512 [Candidate] entering Candidate state
2016/03/28 06:00:17 [DEBUG] raft: Votes needed: 1
2016/03/28 06:00:17 [DEBUG] raft: Vote granted from 127.0.0.1:15512. Tally: 1
2016/03/28 06:00:17 [INFO] raft: Election won. Tally: 1
2016/03/28 06:00:17 [INFO] raft: Node at 127.0.0.1:15512 [Leader] entering Leader state
2016/03/28 06:00:17 [INFO] consul: cluster leadership acquired
2016/03/28 06:00:17 [INFO] consul: New leader elected: Node 15511
2016/03/28 06:00:17 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:00:17 [DEBUG] raft: Node 127.0.0.1:15512 updated peer set (2): [127.0.0.1:15512]
2016/03/28 06:00:17 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:00:18 [INFO] consul: member 'Node 15511' joined, marking health alive
2016/03/28 06:00:19 [WARN] consul.prepared_query: Operation on prepared query for service 'redis' denied due to ACLs
2016/03/28 06:00:19 [WARN] consul.prepared_query: Operation on prepared query '0b64bf54-631e-d6a8-7470-a108adc33d4a' denied because ACL didn't match ACL used to create the query, and a management token wasn't supplied
2016/03/28 06:00:19 [WARN] consul.prepared_query: Operation on prepared query '0b64bf54-631e-d6a8-7470-a108adc33d4a' denied because ACL didn't match ACL used to create the query, and a management token wasn't supplied
2016/03/28 06:00:20 [WARN] consul.prepared_query: Operation on prepared query '0b64bf54-631e-d6a8-7470-a108adc33d4a' denied because ACL didn't match ACL used to create the query, and a management token wasn't supplied
2016/03/28 06:00:20 [WARN] consul.prepared_query: Operation on prepared query '0b64bf54-631e-d6a8-7470-a108adc33d4a' denied because ACL didn't match ACL used to create the query, and a management token wasn't supplied
2016/03/28 06:00:21 [INFO] consul: shutting down server
2016/03/28 06:00:21 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:21 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:22 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 06:00:22 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestPreparedQuery_Apply_ACLDeny (5.52s)
=== RUN   TestPreparedQuery_Apply_ForwardLeader
2016/03/28 06:00:22 [INFO] raft: Node at 127.0.0.1:15516 [Follower] entering Follower state
2016/03/28 06:00:22 [INFO] serf: EventMemberJoin: Node 15515 127.0.0.1
2016/03/28 06:00:22 [INFO] consul: adding LAN server Node 15515 (Addr: 127.0.0.1:15516) (DC: dc1)
2016/03/28 06:00:22 [INFO] serf: EventMemberJoin: Node 15515.dc1 127.0.0.1
2016/03/28 06:00:22 [INFO] consul: adding WAN server Node 15515.dc1 (Addr: 127.0.0.1:15516) (DC: dc1)
2016/03/28 06:00:22 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:00:22 [INFO] raft: Node at 127.0.0.1:15516 [Candidate] entering Candidate state
2016/03/28 06:00:23 [INFO] raft: Node at 127.0.0.1:15520 [Follower] entering Follower state
2016/03/28 06:00:23 [INFO] serf: EventMemberJoin: Node 15519 127.0.0.1
2016/03/28 06:00:23 [INFO] consul: adding LAN server Node 15519 (Addr: 127.0.0.1:15520) (DC: dc1)
2016/03/28 06:00:23 [INFO] serf: EventMemberJoin: Node 15519.dc1 127.0.0.1
2016/03/28 06:00:23 [INFO] consul: adding WAN server Node 15519.dc1 (Addr: 127.0.0.1:15520) (DC: dc1)
2016/03/28 06:00:23 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15517
2016/03/28 06:00:23 [DEBUG] memberlist: TCP connection from=127.0.0.1:48763
2016/03/28 06:00:23 [INFO] serf: EventMemberJoin: Node 15519 127.0.0.1
2016/03/28 06:00:23 [INFO] consul: adding LAN server Node 15519 (Addr: 127.0.0.1:15520) (DC: dc1)
2016/03/28 06:00:23 [INFO] serf: EventMemberJoin: Node 15515 127.0.0.1
2016/03/28 06:00:23 [INFO] consul: adding LAN server Node 15515 (Addr: 127.0.0.1:15516) (DC: dc1)
2016/03/28 06:00:23 [DEBUG] raft: Votes needed: 1
2016/03/28 06:00:23 [DEBUG] raft: Vote granted from 127.0.0.1:15516. Tally: 1
2016/03/28 06:00:23 [INFO] raft: Election won. Tally: 1
2016/03/28 06:00:23 [INFO] raft: Node at 127.0.0.1:15516 [Leader] entering Leader state
2016/03/28 06:00:23 [INFO] consul: cluster leadership acquired
2016/03/28 06:00:23 [INFO] consul: New leader elected: Node 15515
2016/03/28 06:00:23 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:23 [INFO] consul: New leader elected: Node 15515
2016/03/28 06:00:23 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:00:23 [INFO] raft: Node at 127.0.0.1:15520 [Candidate] entering Candidate state
2016/03/28 06:00:23 [DEBUG] serf: messageJoinType: Node 15519
2016/03/28 06:00:23 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:23 [DEBUG] serf: messageJoinType: Node 15519
2016/03/28 06:00:23 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:23 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:23 [DEBUG] serf: messageJoinType: Node 15519
2016/03/28 06:00:23 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:23 [DEBUG] serf: messageJoinType: Node 15519
2016/03/28 06:00:23 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:23 [DEBUG] serf: messageJoinType: Node 15519
2016/03/28 06:00:23 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:23 [DEBUG] serf: messageJoinType: Node 15519
2016/03/28 06:00:23 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:23 [DEBUG] serf: messageJoinType: Node 15519
2016/03/28 06:00:23 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:00:23 [DEBUG] serf: messageJoinType: Node 15519
2016/03/28 06:00:24 [DEBUG] raft: Node 127.0.0.1:15516 updated peer set (2): [127.0.0.1:15516]
2016/03/28 06:00:24 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:00:24 [INFO] consul: member 'Node 15515' joined, marking health alive
2016/03/28 06:00:24 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:24 [INFO] consul: member 'Node 15519' joined, marking health alive
2016/03/28 06:00:24 [DEBUG] raft: Votes needed: 1
2016/03/28 06:00:24 [DEBUG] raft: Vote granted from 127.0.0.1:15520. Tally: 1
2016/03/28 06:00:24 [INFO] raft: Election won. Tally: 1
2016/03/28 06:00:24 [INFO] raft: Node at 127.0.0.1:15520 [Leader] entering Leader state
2016/03/28 06:00:24 [INFO] consul: cluster leadership acquired
2016/03/28 06:00:24 [INFO] consul: New leader elected: Node 15519
2016/03/28 06:00:24 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:24 [INFO] consul: New leader elected: Node 15519
2016/03/28 06:00:24 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:24 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:24 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:24 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:24 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:24 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:24 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:25 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:00:25 [DEBUG] raft: Node 127.0.0.1:15520 updated peer set (2): [127.0.0.1:15520]
2016/03/28 06:00:25 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:25 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:00:25 [INFO] consul: member 'Node 15519' joined, marking health alive
2016/03/28 06:00:25 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:25 [ERR] consul: 'Node 15515' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:25 [INFO] consul: member 'Node 15515' joined, marking health alive
2016/03/28 06:00:26 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:26 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:27 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:27 [ERR] consul: 'Node 15515' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:27 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:27 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:27 [ERR] consul: 'Node 15515' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:27 [INFO] consul: shutting down server
2016/03/28 06:00:27 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:28 [DEBUG] memberlist: Failed UDP ping: Node 15519 (timeout reached)
2016/03/28 06:00:28 [INFO] memberlist: Suspect Node 15519 has failed, no acks received
2016/03/28 06:00:28 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:28 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:28 [DEBUG] memberlist: Failed UDP ping: Node 15519 (timeout reached)
2016/03/28 06:00:28 [INFO] memberlist: Suspect Node 15519 has failed, no acks received
2016/03/28 06:00:28 [INFO] memberlist: Marking Node 15519 as failed, suspect timeout reached
2016/03/28 06:00:28 [INFO] serf: EventMemberFailed: Node 15519 127.0.0.1
2016/03/28 06:00:28 [INFO] consul: removing LAN server Node 15519 (Addr: 127.0.0.1:15520) (DC: dc1)
2016/03/28 06:00:28 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 06:00:28 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 06:00:28 [INFO] consul: shutting down server
2016/03/28 06:00:28 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:28 [DEBUG] memberlist: Failed UDP ping: Node 15519 (timeout reached)
2016/03/28 06:00:28 [INFO] memberlist: Suspect Node 15519 has failed, no acks received
2016/03/28 06:00:28 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:28 [INFO] consul: member 'Node 15519' failed, marking health critical
2016/03/28 06:00:29 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/03/28 06:00:29 [ERR] consul: failed to reconcile member: {Node 15519 127.0.0.1 15521 map[role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15520 bootstrap:1] failed 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 06:00:29 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 06:00:29 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestPreparedQuery_Apply_ForwardLeader (7.20s)
=== RUN   TestPreparedQuery_parseQuery
--- PASS: TestPreparedQuery_parseQuery (0.00s)
=== RUN   TestPreparedQuery_Get
2016/03/28 06:00:29 [INFO] raft: Node at 127.0.0.1:15524 [Follower] entering Follower state
2016/03/28 06:00:29 [INFO] serf: EventMemberJoin: Node 15523 127.0.0.1
2016/03/28 06:00:29 [INFO] consul: adding LAN server Node 15523 (Addr: 127.0.0.1:15524) (DC: dc1)
2016/03/28 06:00:29 [INFO] serf: EventMemberJoin: Node 15523.dc1 127.0.0.1
2016/03/28 06:00:29 [INFO] consul: adding WAN server Node 15523.dc1 (Addr: 127.0.0.1:15524) (DC: dc1)
2016/03/28 06:00:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:00:29 [INFO] raft: Node at 127.0.0.1:15524 [Candidate] entering Candidate state
2016/03/28 06:00:30 [DEBUG] raft: Votes needed: 1
2016/03/28 06:00:30 [DEBUG] raft: Vote granted from 127.0.0.1:15524. Tally: 1
2016/03/28 06:00:30 [INFO] raft: Election won. Tally: 1
2016/03/28 06:00:30 [INFO] raft: Node at 127.0.0.1:15524 [Leader] entering Leader state
2016/03/28 06:00:30 [INFO] consul: cluster leadership acquired
2016/03/28 06:00:30 [INFO] consul: New leader elected: Node 15523
2016/03/28 06:00:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:00:30 [DEBUG] raft: Node 127.0.0.1:15524 updated peer set (2): [127.0.0.1:15524]
2016/03/28 06:00:30 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:00:30 [INFO] consul: member 'Node 15523' joined, marking health alive
2016/03/28 06:00:33 [WARN] consul.prepared_query: Request to get prepared query '64cef79c-ccf3-69a8-c248-d2e4eb34db68' denied because ACL didn't match ACL used to create the query, and a management token wasn't supplied
2016/03/28 06:00:33 [WARN] consul.prepared_query: Request to get prepared query '64cef79c-ccf3-69a8-c248-d2e4eb34db68' denied because ACL didn't match ACL used to create the query, and a management token wasn't supplied
2016/03/28 06:00:33 [INFO] consul: shutting down server
2016/03/28 06:00:33 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:33 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:33 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestPreparedQuery_Get (3.94s)
=== RUN   TestPreparedQuery_List
2016/03/28 06:00:33 [INFO] raft: Node at 127.0.0.1:15528 [Follower] entering Follower state
2016/03/28 06:00:33 [INFO] serf: EventMemberJoin: Node 15527 127.0.0.1
2016/03/28 06:00:33 [INFO] consul: adding LAN server Node 15527 (Addr: 127.0.0.1:15528) (DC: dc1)
2016/03/28 06:00:33 [INFO] serf: EventMemberJoin: Node 15527.dc1 127.0.0.1
2016/03/28 06:00:33 [INFO] consul: adding WAN server Node 15527.dc1 (Addr: 127.0.0.1:15528) (DC: dc1)
2016/03/28 06:00:33 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:00:33 [INFO] raft: Node at 127.0.0.1:15528 [Candidate] entering Candidate state
2016/03/28 06:00:34 [DEBUG] raft: Votes needed: 1
2016/03/28 06:00:34 [DEBUG] raft: Vote granted from 127.0.0.1:15528. Tally: 1
2016/03/28 06:00:34 [INFO] raft: Election won. Tally: 1
2016/03/28 06:00:34 [INFO] raft: Node at 127.0.0.1:15528 [Leader] entering Leader state
2016/03/28 06:00:34 [INFO] consul: cluster leadership acquired
2016/03/28 06:00:34 [INFO] consul: New leader elected: Node 15527
2016/03/28 06:00:34 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:00:34 [DEBUG] raft: Node 127.0.0.1:15528 updated peer set (2): [127.0.0.1:15528]
2016/03/28 06:00:34 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:00:34 [INFO] consul: member 'Node 15527' joined, marking health alive
2016/03/28 06:00:36 [WARN] consul.prepared_query: Request to list prepared queries denied due to ACLs
2016/03/28 06:00:36 [WARN] consul.prepared_query: Request to list prepared queries denied due to ACLs
2016/03/28 06:00:36 [INFO] consul: shutting down server
2016/03/28 06:00:36 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:36 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:36 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestPreparedQuery_List (3.30s)
=== RUN   TestPreparedQuery_Execute
2016/03/28 06:00:37 [INFO] raft: Node at 127.0.0.1:15532 [Follower] entering Follower state
2016/03/28 06:00:37 [INFO] serf: EventMemberJoin: Node 15531 127.0.0.1
2016/03/28 06:00:37 [INFO] consul: adding LAN server Node 15531 (Addr: 127.0.0.1:15532) (DC: dc1)
2016/03/28 06:00:37 [INFO] serf: EventMemberJoin: Node 15531.dc1 127.0.0.1
2016/03/28 06:00:37 [INFO] consul: adding WAN server Node 15531.dc1 (Addr: 127.0.0.1:15532) (DC: dc1)
2016/03/28 06:00:37 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:00:37 [INFO] raft: Node at 127.0.0.1:15532 [Candidate] entering Candidate state
2016/03/28 06:00:38 [INFO] raft: Node at 127.0.0.1:15536 [Follower] entering Follower state
2016/03/28 06:00:38 [INFO] serf: EventMemberJoin: Node 15535 127.0.0.1
2016/03/28 06:00:38 [INFO] consul: adding LAN server Node 15535 (Addr: 127.0.0.1:15536) (DC: dc2)
2016/03/28 06:00:38 [INFO] serf: EventMemberJoin: Node 15535.dc2 127.0.0.1
2016/03/28 06:00:38 [INFO] consul: adding WAN server Node 15535.dc2 (Addr: 127.0.0.1:15536) (DC: dc2)
2016/03/28 06:00:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:00:38 [INFO] raft: Node at 127.0.0.1:15536 [Candidate] entering Candidate state
2016/03/28 06:00:38 [DEBUG] raft: Votes needed: 1
2016/03/28 06:00:38 [DEBUG] raft: Vote granted from 127.0.0.1:15532. Tally: 1
2016/03/28 06:00:38 [INFO] raft: Election won. Tally: 1
2016/03/28 06:00:38 [INFO] raft: Node at 127.0.0.1:15532 [Leader] entering Leader state
2016/03/28 06:00:38 [INFO] consul: cluster leadership acquired
2016/03/28 06:00:38 [INFO] consul: New leader elected: Node 15531
2016/03/28 06:00:38 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:00:38 [DEBUG] raft: Node 127.0.0.1:15532 updated peer set (2): [127.0.0.1:15532]
2016/03/28 06:00:39 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:00:39 [DEBUG] raft: Votes needed: 1
2016/03/28 06:00:39 [DEBUG] raft: Vote granted from 127.0.0.1:15536. Tally: 1
2016/03/28 06:00:39 [INFO] raft: Election won. Tally: 1
2016/03/28 06:00:39 [INFO] raft: Node at 127.0.0.1:15536 [Leader] entering Leader state
2016/03/28 06:00:39 [INFO] consul: cluster leadership acquired
2016/03/28 06:00:39 [INFO] consul: New leader elected: Node 15535
2016/03/28 06:00:40 [INFO] consul: member 'Node 15531' joined, marking health alive
2016/03/28 06:00:40 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:00:40 [DEBUG] raft: Node 127.0.0.1:15536 updated peer set (2): [127.0.0.1:15536]
2016/03/28 06:00:40 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:00:40 [INFO] consul: member 'Node 15535' joined, marking health alive
2016/03/28 06:00:41 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15534
2016/03/28 06:00:41 [DEBUG] memberlist: TCP connection from=127.0.0.1:50361
2016/03/28 06:00:41 [INFO] serf: EventMemberJoin: Node 15535.dc2 127.0.0.1
2016/03/28 06:00:41 [INFO] consul: adding WAN server Node 15535.dc2 (Addr: 127.0.0.1:15536) (DC: dc2)
2016/03/28 06:00:41 [INFO] serf: EventMemberJoin: Node 15531.dc1 127.0.0.1
2016/03/28 06:00:41 [INFO] consul: adding WAN server Node 15531.dc1 (Addr: 127.0.0.1:15532) (DC: dc1)
2016/03/28 06:00:41 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/28 06:00:41 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/28 06:00:41 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/28 06:00:41 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/28 06:00:41 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/28 06:00:41 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/28 06:00:41 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/28 06:00:41 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/28 06:00:44 [DEBUG] memberlist: Potential blocking operation. Last command took 16.391334ms
2016/03/28 06:00:45 [DEBUG] memberlist: Potential blocking operation. Last command took 11.490333ms
2016/03/28 06:00:47 [DEBUG] memberlist: Potential blocking operation. Last command took 10.243334ms
2016/03/28 06:00:47 [DEBUG] memberlist: Potential blocking operation. Last command took 10.372ms
2016/03/28 06:00:50 [DEBUG] memberlist: Potential blocking operation. Last command took 10.132ms
2016/03/28 06:00:53 [DEBUG] memberlist: Potential blocking operation. Last command took 11.370333ms
2016/03/28 06:00:54 [INFO] consul: shutting down server
2016/03/28 06:00:54 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:54 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:55 [INFO] consul: shutting down server
2016/03/28 06:00:55 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:55 [DEBUG] memberlist: Failed UDP ping: Node 15535.dc2 (timeout reached)
2016/03/28 06:00:55 [INFO] memberlist: Suspect Node 15535.dc2 has failed, no acks received
2016/03/28 06:00:55 [WARN] serf: Shutdown without a Leave
2016/03/28 06:00:55 [INFO] memberlist: Marking Node 15535.dc2 as failed, suspect timeout reached
2016/03/28 06:00:55 [INFO] serf: EventMemberFailed: Node 15535.dc2 127.0.0.1
2016/03/28 06:00:55 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 06:00:55 [WARN] consul.coordinate: Batch update failed: leadership lost while committing log
2016/03/28 06:00:55 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- FAIL: TestPreparedQuery_Execute (19.19s)
	prepared_query_endpoint_test.go:1181: bad: {foo [{0x10d27980 0x10f9f940 []} {0x10d279e0 0x10f9fd40 []} {0x10d27a80 0x10f9fdc0 []} {0x10d27ae0 0x10f9f1c0 []} {0x10d27b60 0x10f9f200 []} {0x10d27c40 0x10f9f280 []} {0x10d27dc0 0x10f9f2c0 []} {0x10d27e60 0x10f9f300 []} {0x10d27ec0 0x10f9f340 []} {0x10d44000 0x10f9f3c0 []}] {10s} dc1 0 {0 0 true}}
=== RUN   TestPreparedQuery_Execute_ForwardLeader
2016/03/28 06:00:56 [INFO] raft: Node at 127.0.0.1:15540 [Follower] entering Follower state
2016/03/28 06:00:56 [INFO] serf: EventMemberJoin: Node 15539 127.0.0.1
2016/03/28 06:00:56 [INFO] consul: adding LAN server Node 15539 (Addr: 127.0.0.1:15540) (DC: dc1)
2016/03/28 06:00:56 [INFO] serf: EventMemberJoin: Node 15539.dc1 127.0.0.1
2016/03/28 06:00:56 [INFO] consul: adding WAN server Node 15539.dc1 (Addr: 127.0.0.1:15540) (DC: dc1)
2016/03/28 06:00:56 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:00:56 [INFO] raft: Node at 127.0.0.1:15540 [Candidate] entering Candidate state
2016/03/28 06:00:57 [INFO] raft: Node at 127.0.0.1:15544 [Follower] entering Follower state
2016/03/28 06:00:57 [INFO] serf: EventMemberJoin: Node 15543 127.0.0.1
2016/03/28 06:00:57 [INFO] consul: adding LAN server Node 15543 (Addr: 127.0.0.1:15544) (DC: dc1)
2016/03/28 06:00:57 [INFO] serf: EventMemberJoin: Node 15543.dc1 127.0.0.1
2016/03/28 06:00:57 [INFO] consul: adding WAN server Node 15543.dc1 (Addr: 127.0.0.1:15544) (DC: dc1)
2016/03/28 06:00:57 [DEBUG] memberlist: TCP connection from=127.0.0.1:58808
2016/03/28 06:00:57 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15541
2016/03/28 06:00:57 [INFO] serf: EventMemberJoin: Node 15543 127.0.0.1
2016/03/28 06:00:57 [INFO] consul: adding LAN server Node 15543 (Addr: 127.0.0.1:15544) (DC: dc1)
2016/03/28 06:00:57 [INFO] serf: EventMemberJoin: Node 15539 127.0.0.1
2016/03/28 06:00:57 [INFO] consul: adding LAN server Node 15539 (Addr: 127.0.0.1:15540) (DC: dc1)
2016/03/28 06:00:57 [DEBUG] raft: Votes needed: 1
2016/03/28 06:00:57 [DEBUG] raft: Vote granted from 127.0.0.1:15540. Tally: 1
2016/03/28 06:00:57 [INFO] raft: Election won. Tally: 1
2016/03/28 06:00:57 [INFO] raft: Node at 127.0.0.1:15540 [Leader] entering Leader state
2016/03/28 06:00:57 [INFO] consul: cluster leadership acquired
2016/03/28 06:00:57 [INFO] consul: New leader elected: Node 15539
2016/03/28 06:00:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:00:57 [INFO] raft: Node at 127.0.0.1:15544 [Candidate] entering Candidate state
2016/03/28 06:00:57 [DEBUG] serf: messageJoinType: Node 15543
2016/03/28 06:00:57 [DEBUG] serf: messageJoinType: Node 15543
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [INFO] consul: New leader elected: Node 15539
2016/03/28 06:00:57 [DEBUG] serf: messageJoinType: Node 15543
2016/03/28 06:00:57 [DEBUG] serf: messageJoinType: Node 15543
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [DEBUG] serf: messageJoinType: Node 15543
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [DEBUG] serf: messageJoinType: Node 15543
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [DEBUG] serf: messageJoinType: Node 15543
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [DEBUG] serf: messageJoinType: Node 15543
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:00:57 [DEBUG] raft: Node 127.0.0.1:15540 updated peer set (2): [127.0.0.1:15540]
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:00:57 [INFO] consul: member 'Node 15539' joined, marking health alive
2016/03/28 06:00:57 [DEBUG] raft: Votes needed: 1
2016/03/28 06:00:57 [DEBUG] raft: Vote granted from 127.0.0.1:15544. Tally: 1
2016/03/28 06:00:57 [INFO] raft: Election won. Tally: 1
2016/03/28 06:00:57 [INFO] raft: Node at 127.0.0.1:15544 [Leader] entering Leader state
2016/03/28 06:00:57 [INFO] consul: cluster leadership acquired
2016/03/28 06:00:57 [INFO] consul: New leader elected: Node 15543
2016/03/28 06:00:57 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:57 [INFO] consul: member 'Node 15543' joined, marking health alive
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [INFO] consul: New leader elected: Node 15543
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:00:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:00:58 [DEBUG] raft: Node 127.0.0.1:15544 updated peer set (2): [127.0.0.1:15544]
2016/03/28 06:00:58 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:58 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:00:58 [INFO] consul: member 'Node 15543' joined, marking health alive
2016/03/28 06:00:58 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:58 [ERR] consul: 'Node 15539' and 'Node 15543' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:58 [INFO] consul: member 'Node 15539' joined, marking health alive
2016/03/28 06:00:58 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:59 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:59 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:59 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:59 [ERR] consul: 'Node 15539' and 'Node 15543' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:59 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:00:59 [ERR] consul: 'Node 15539' and 'Node 15543' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:01:00 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:01:00 [ERR] consul: 'Node 15539' and 'Node 15543' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:01:00 [INFO] consul: shutting down server
2016/03/28 06:01:00 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:00 [DEBUG] memberlist: Failed UDP ping: Node 15543 (timeout reached)
2016/03/28 06:01:00 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:00 [INFO] memberlist: Suspect Node 15543 has failed, no acks received
2016/03/28 06:01:00 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:01:00 [ERR] consul: 'Node 15539' and 'Node 15543' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/28 06:01:00 [INFO] consul: shutting down server
2016/03/28 06:01:00 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:00 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:00 [INFO] memberlist: Marking Node 15543 as failed, suspect timeout reached
2016/03/28 06:01:00 [INFO] serf: EventMemberFailed: Node 15543 127.0.0.1
2016/03/28 06:01:00 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestPreparedQuery_Execute_ForwardLeader (4.83s)
=== RUN   TestPreparedQuery_tagFilter
--- PASS: TestPreparedQuery_tagFilter (0.00s)
=== RUN   TestPreparedQuery_Wrapper
2016/03/28 06:01:01 [INFO] raft: Node at 127.0.0.1:15548 [Follower] entering Follower state
2016/03/28 06:01:01 [INFO] serf: EventMemberJoin: Node 15547 127.0.0.1
2016/03/28 06:01:01 [INFO] consul: adding LAN server Node 15547 (Addr: 127.0.0.1:15548) (DC: dc1)
2016/03/28 06:01:01 [INFO] serf: EventMemberJoin: Node 15547.dc1 127.0.0.1
2016/03/28 06:01:01 [INFO] consul: adding WAN server Node 15547.dc1 (Addr: 127.0.0.1:15548) (DC: dc1)
2016/03/28 06:01:01 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:01 [INFO] raft: Node at 127.0.0.1:15548 [Candidate] entering Candidate state
2016/03/28 06:01:01 [INFO] raft: Node at 127.0.0.1:15552 [Follower] entering Follower state
2016/03/28 06:01:01 [INFO] serf: EventMemberJoin: Node 15551 127.0.0.1
2016/03/28 06:01:01 [INFO] consul: adding LAN server Node 15551 (Addr: 127.0.0.1:15552) (DC: dc2)
2016/03/28 06:01:01 [INFO] serf: EventMemberJoin: Node 15551.dc2 127.0.0.1
2016/03/28 06:01:01 [INFO] consul: adding WAN server Node 15551.dc2 (Addr: 127.0.0.1:15552) (DC: dc2)
2016/03/28 06:01:01 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:01 [DEBUG] raft: Vote granted from 127.0.0.1:15548. Tally: 1
2016/03/28 06:01:01 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:01 [INFO] raft: Node at 127.0.0.1:15548 [Leader] entering Leader state
2016/03/28 06:01:01 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:01 [INFO] consul: New leader elected: Node 15547
2016/03/28 06:01:01 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:01 [INFO] raft: Node at 127.0.0.1:15552 [Candidate] entering Candidate state
2016/03/28 06:01:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:02 [DEBUG] raft: Node 127.0.0.1:15548 updated peer set (2): [127.0.0.1:15548]
2016/03/28 06:01:02 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:03 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:03 [DEBUG] raft: Vote granted from 127.0.0.1:15552. Tally: 1
2016/03/28 06:01:03 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:03 [INFO] raft: Node at 127.0.0.1:15552 [Leader] entering Leader state
2016/03/28 06:01:03 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:03 [INFO] consul: New leader elected: Node 15551
2016/03/28 06:01:03 [INFO] consul: member 'Node 15547' joined, marking health alive
2016/03/28 06:01:03 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:03 [DEBUG] raft: Node 127.0.0.1:15552 updated peer set (2): [127.0.0.1:15552]
2016/03/28 06:01:03 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:03 [INFO] consul: member 'Node 15551' joined, marking health alive
2016/03/28 06:01:04 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15550
2016/03/28 06:01:04 [DEBUG] memberlist: TCP connection from=127.0.0.1:60448
2016/03/28 06:01:04 [INFO] serf: EventMemberJoin: Node 15551.dc2 127.0.0.1
2016/03/28 06:01:04 [INFO] serf: EventMemberJoin: Node 15547.dc1 127.0.0.1
2016/03/28 06:01:04 [INFO] consul: adding WAN server Node 15551.dc2 (Addr: 127.0.0.1:15552) (DC: dc2)
2016/03/28 06:01:04 [INFO] consul: adding WAN server Node 15547.dc1 (Addr: 127.0.0.1:15548) (DC: dc1)
2016/03/28 06:01:04 [DEBUG] Test
2016/03/28 06:01:04 [INFO] consul: shutting down server
2016/03/28 06:01:04 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:04 [DEBUG] serf: messageJoinType: Node 15551.dc2
2016/03/28 06:01:04 [DEBUG] serf: messageJoinType: Node 15551.dc2
2016/03/28 06:01:04 [DEBUG] serf: messageJoinType: Node 15551.dc2
2016/03/28 06:01:04 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:04 [DEBUG] memberlist: Failed UDP ping: Node 15551.dc2 (timeout reached)
2016/03/28 06:01:04 [INFO] memberlist: Suspect Node 15551.dc2 has failed, no acks received
2016/03/28 06:01:04 [INFO] consul: shutting down server
2016/03/28 06:01:04 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:04 [DEBUG] memberlist: Failed UDP ping: Node 15551.dc2 (timeout reached)
2016/03/28 06:01:04 [INFO] memberlist: Suspect Node 15551.dc2 has failed, no acks received
2016/03/28 06:01:04 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:04 [INFO] memberlist: Marking Node 15551.dc2 as failed, suspect timeout reached
2016/03/28 06:01:04 [INFO] serf: EventMemberFailed: Node 15551.dc2 127.0.0.1
2016/03/28 06:01:05 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestPreparedQuery_Wrapper (4.60s)
=== RUN   TestPreparedQuery_queryFailover
--- PASS: TestPreparedQuery_queryFailover (0.00s)
=== RUN   TestRTT_sortNodesByDistanceFrom
2016/03/28 06:01:05 [INFO] raft: Node at 127.0.0.1:15556 [Follower] entering Follower state
2016/03/28 06:01:05 [INFO] serf: EventMemberJoin: Node 15555 127.0.0.1
2016/03/28 06:01:05 [INFO] consul: adding LAN server Node 15555 (Addr: 127.0.0.1:15556) (DC: dc1)
2016/03/28 06:01:05 [INFO] serf: EventMemberJoin: Node 15555.dc1 127.0.0.1
2016/03/28 06:01:05 [INFO] consul: adding WAN server Node 15555.dc1 (Addr: 127.0.0.1:15556) (DC: dc1)
2016/03/28 06:01:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:05 [INFO] raft: Node at 127.0.0.1:15556 [Candidate] entering Candidate state
2016/03/28 06:01:06 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:06 [DEBUG] raft: Vote granted from 127.0.0.1:15556. Tally: 1
2016/03/28 06:01:06 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:06 [INFO] raft: Node at 127.0.0.1:15556 [Leader] entering Leader state
2016/03/28 06:01:06 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:06 [INFO] consul: New leader elected: Node 15555
2016/03/28 06:01:06 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:06 [DEBUG] raft: Node 127.0.0.1:15556 updated peer set (2): [127.0.0.1:15556]
2016/03/28 06:01:06 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:06 [INFO] consul: member 'Node 15555' joined, marking health alive
2016/03/28 06:01:09 [INFO] consul: shutting down server
2016/03/28 06:01:09 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:09 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:09 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 06:01:09 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 06:01:09 [WARN] consul.coordinate: Batch update failed: leadership lost while committing log
--- FAIL: TestRTT_sortNodesByDistanceFrom (4.89s)
	rtt_test.go:37: bad sort: apple,node1,node2,node3,node4,node5 != node1,node4,node5,node2,node3,apple
=== RUN   TestRTT_sortNodesByDistanceFrom_Nodes
2016/03/28 06:01:11 [INFO] raft: Node at 127.0.0.1:15560 [Follower] entering Follower state
2016/03/28 06:01:11 [INFO] serf: EventMemberJoin: Node 15559 127.0.0.1
2016/03/28 06:01:11 [INFO] consul: adding LAN server Node 15559 (Addr: 127.0.0.1:15560) (DC: dc1)
2016/03/28 06:01:11 [INFO] serf: EventMemberJoin: Node 15559.dc1 127.0.0.1
2016/03/28 06:01:11 [INFO] consul: adding WAN server Node 15559.dc1 (Addr: 127.0.0.1:15560) (DC: dc1)
2016/03/28 06:01:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:11 [INFO] raft: Node at 127.0.0.1:15560 [Candidate] entering Candidate state
2016/03/28 06:01:11 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:11 [DEBUG] raft: Vote granted from 127.0.0.1:15560. Tally: 1
2016/03/28 06:01:11 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:11 [INFO] raft: Node at 127.0.0.1:15560 [Leader] entering Leader state
2016/03/28 06:01:11 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:11 [INFO] consul: New leader elected: Node 15559
2016/03/28 06:01:11 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:12 [DEBUG] raft: Node 127.0.0.1:15560 updated peer set (2): [127.0.0.1:15560]
2016/03/28 06:01:12 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:12 [INFO] consul: member 'Node 15559' joined, marking health alive
2016/03/28 06:01:14 [INFO] consul: shutting down server
2016/03/28 06:01:14 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:14 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:14 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 06:01:14 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 06:01:14 [WARN] consul.coordinate: Batch update failed: leadership lost while committing log
--- FAIL: TestRTT_sortNodesByDistanceFrom_Nodes (4.89s)
	rtt_test.go:37: bad sort: apple,node1,node2,node3,node4,node5 != node1,node4,node5,node2,node3,apple
=== RUN   TestRTT_sortNodesByDistanceFrom_ServiceNodes
2016/03/28 06:01:15 [INFO] raft: Node at 127.0.0.1:15564 [Follower] entering Follower state
2016/03/28 06:01:15 [INFO] serf: EventMemberJoin: Node 15563 127.0.0.1
2016/03/28 06:01:15 [INFO] consul: adding LAN server Node 15563 (Addr: 127.0.0.1:15564) (DC: dc1)
2016/03/28 06:01:15 [INFO] serf: EventMemberJoin: Node 15563.dc1 127.0.0.1
2016/03/28 06:01:15 [INFO] consul: adding WAN server Node 15563.dc1 (Addr: 127.0.0.1:15564) (DC: dc1)
2016/03/28 06:01:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:15 [INFO] raft: Node at 127.0.0.1:15564 [Candidate] entering Candidate state
2016/03/28 06:01:15 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:15 [DEBUG] raft: Vote granted from 127.0.0.1:15564. Tally: 1
2016/03/28 06:01:15 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:15 [INFO] raft: Node at 127.0.0.1:15564 [Leader] entering Leader state
2016/03/28 06:01:15 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:15 [INFO] consul: New leader elected: Node 15563
2016/03/28 06:01:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:16 [DEBUG] raft: Node 127.0.0.1:15564 updated peer set (2): [127.0.0.1:15564]
2016/03/28 06:01:16 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:16 [INFO] consul: member 'Node 15563' joined, marking health alive
2016/03/28 06:01:18 [INFO] consul: shutting down server
2016/03/28 06:01:18 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:18 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:18 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 06:01:18 [WARN] consul.coordinate: Batch update failed: leadership lost while committing log
--- FAIL: TestRTT_sortNodesByDistanceFrom_ServiceNodes (4.10s)
	rtt_test.go:50: bad sort: apple,node1,node2,node3,node4,node5 != node1,node4,node5,node2,node3,apple
=== RUN   TestRTT_sortNodesByDistanceFrom_HealthChecks
2016/03/28 06:01:19 [INFO] raft: Node at 127.0.0.1:15568 [Follower] entering Follower state
2016/03/28 06:01:19 [INFO] serf: EventMemberJoin: Node 15567 127.0.0.1
2016/03/28 06:01:19 [INFO] consul: adding LAN server Node 15567 (Addr: 127.0.0.1:15568) (DC: dc1)
2016/03/28 06:01:19 [INFO] serf: EventMemberJoin: Node 15567.dc1 127.0.0.1
2016/03/28 06:01:19 [INFO] consul: adding WAN server Node 15567.dc1 (Addr: 127.0.0.1:15568) (DC: dc1)
2016/03/28 06:01:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:19 [INFO] raft: Node at 127.0.0.1:15568 [Candidate] entering Candidate state
2016/03/28 06:01:20 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:20 [DEBUG] raft: Vote granted from 127.0.0.1:15568. Tally: 1
2016/03/28 06:01:20 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:20 [INFO] raft: Node at 127.0.0.1:15568 [Leader] entering Leader state
2016/03/28 06:01:20 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:20 [INFO] consul: New leader elected: Node 15567
2016/03/28 06:01:20 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:20 [DEBUG] raft: Node 127.0.0.1:15568 updated peer set (2): [127.0.0.1:15568]
2016/03/28 06:01:20 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:20 [INFO] consul: member 'Node 15567' joined, marking health alive
2016/03/28 06:01:22 [INFO] consul: shutting down server
2016/03/28 06:01:22 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:22 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:23 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 06:01:23 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- FAIL: TestRTT_sortNodesByDistanceFrom_HealthChecks (4.13s)
	rtt_test.go:63: bad sort: apple,node1,node2,node3,node4,node5 != node1,node4,node5,node2,node3,apple
=== RUN   TestRTT_sortNodesByDistanceFrom_CheckServiceNodes
2016/03/28 06:01:23 [INFO] raft: Node at 127.0.0.1:15572 [Follower] entering Follower state
2016/03/28 06:01:23 [INFO] serf: EventMemberJoin: Node 15571 127.0.0.1
2016/03/28 06:01:23 [INFO] consul: adding LAN server Node 15571 (Addr: 127.0.0.1:15572) (DC: dc1)
2016/03/28 06:01:23 [INFO] serf: EventMemberJoin: Node 15571.dc1 127.0.0.1
2016/03/28 06:01:23 [INFO] consul: adding WAN server Node 15571.dc1 (Addr: 127.0.0.1:15572) (DC: dc1)
2016/03/28 06:01:23 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:23 [INFO] raft: Node at 127.0.0.1:15572 [Candidate] entering Candidate state
2016/03/28 06:01:24 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:24 [DEBUG] raft: Vote granted from 127.0.0.1:15572. Tally: 1
2016/03/28 06:01:24 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:24 [INFO] raft: Node at 127.0.0.1:15572 [Leader] entering Leader state
2016/03/28 06:01:24 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:24 [INFO] consul: New leader elected: Node 15571
2016/03/28 06:01:24 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:24 [DEBUG] raft: Node 127.0.0.1:15572 updated peer set (2): [127.0.0.1:15572]
2016/03/28 06:01:24 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:24 [INFO] consul: member 'Node 15571' joined, marking health alive
2016/03/28 06:01:27 [INFO] consul: shutting down server
2016/03/28 06:01:27 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:27 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:27 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 06:01:27 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 06:01:27 [WARN] consul.coordinate: Batch update failed: leadership lost while committing log
--- FAIL: TestRTT_sortNodesByDistanceFrom_CheckServiceNodes (4.64s)
	rtt_test.go:76: bad sort: apple,node1,node2,node3,node4,node5 != node1,node4,node5,node2,node3,apple
=== RUN   TestRTT_getDatacenterDistance
--- PASS: TestRTT_getDatacenterDistance (0.00s)
=== RUN   TestRTT_sortDatacentersByDistance
--- PASS: TestRTT_sortDatacentersByDistance (0.00s)
=== RUN   TestRTT_getDatacenterMaps
--- PASS: TestRTT_getDatacenterMaps (0.00s)
=== RUN   TestRTT_getDatacentersByDistance
2016/03/28 06:01:28 [INFO] raft: Node at 127.0.0.1:15576 [Follower] entering Follower state
2016/03/28 06:01:28 [INFO] serf: EventMemberJoin: Node 15575 127.0.0.1
2016/03/28 06:01:28 [INFO] consul: adding LAN server Node 15575 (Addr: 127.0.0.1:15576) (DC: xxx)
2016/03/28 06:01:28 [INFO] serf: EventMemberJoin: Node 15575.xxx 127.0.0.1
2016/03/28 06:01:28 [INFO] consul: adding WAN server Node 15575.xxx (Addr: 127.0.0.1:15576) (DC: xxx)
2016/03/28 06:01:28 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:28 [INFO] raft: Node at 127.0.0.1:15576 [Candidate] entering Candidate state
2016/03/28 06:01:29 [INFO] raft: Node at 127.0.0.1:15580 [Follower] entering Follower state
2016/03/28 06:01:29 [INFO] serf: EventMemberJoin: Node 15579 127.0.0.1
2016/03/28 06:01:29 [INFO] consul: adding LAN server Node 15579 (Addr: 127.0.0.1:15580) (DC: dc1)
2016/03/28 06:01:29 [INFO] serf: EventMemberJoin: Node 15579.dc1 127.0.0.1
2016/03/28 06:01:29 [INFO] consul: adding WAN server Node 15579.dc1 (Addr: 127.0.0.1:15580) (DC: dc1)
2016/03/28 06:01:29 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:29 [DEBUG] raft: Vote granted from 127.0.0.1:15576. Tally: 1
2016/03/28 06:01:29 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:29 [INFO] raft: Node at 127.0.0.1:15576 [Leader] entering Leader state
2016/03/28 06:01:29 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:29 [INFO] consul: New leader elected: Node 15575
2016/03/28 06:01:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:29 [INFO] raft: Node at 127.0.0.1:15580 [Candidate] entering Candidate state
2016/03/28 06:01:29 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:29 [DEBUG] raft: Node 127.0.0.1:15576 updated peer set (2): [127.0.0.1:15576]
2016/03/28 06:01:29 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:29 [INFO] consul: member 'Node 15575' joined, marking health alive
2016/03/28 06:01:29 [INFO] raft: Node at 127.0.0.1:15584 [Follower] entering Follower state
2016/03/28 06:01:29 [INFO] serf: EventMemberJoin: Node 15583 127.0.0.1
2016/03/28 06:01:29 [INFO] consul: adding LAN server Node 15583 (Addr: 127.0.0.1:15584) (DC: dc2)
2016/03/28 06:01:29 [INFO] serf: EventMemberJoin: Node 15583.dc2 127.0.0.1
2016/03/28 06:01:29 [INFO] consul: adding WAN server Node 15583.dc2 (Addr: 127.0.0.1:15584) (DC: dc2)
2016/03/28 06:01:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:29 [INFO] raft: Node at 127.0.0.1:15584 [Candidate] entering Candidate state
2016/03/28 06:01:29 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:29 [DEBUG] raft: Vote granted from 127.0.0.1:15580. Tally: 1
2016/03/28 06:01:29 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:29 [INFO] raft: Node at 127.0.0.1:15580 [Leader] entering Leader state
2016/03/28 06:01:29 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:29 [INFO] consul: New leader elected: Node 15579
2016/03/28 06:01:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:30 [DEBUG] raft: Node 127.0.0.1:15580 updated peer set (2): [127.0.0.1:15580]
2016/03/28 06:01:30 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:30 [INFO] consul: member 'Node 15579' joined, marking health alive
2016/03/28 06:01:30 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:30 [DEBUG] raft: Vote granted from 127.0.0.1:15584. Tally: 1
2016/03/28 06:01:30 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:30 [INFO] raft: Node at 127.0.0.1:15584 [Leader] entering Leader state
2016/03/28 06:01:30 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:30 [INFO] consul: New leader elected: Node 15583
2016/03/28 06:01:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:30 [DEBUG] raft: Node 127.0.0.1:15584 updated peer set (2): [127.0.0.1:15584]
2016/03/28 06:01:30 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:30 [INFO] consul: member 'Node 15583' joined, marking health alive
2016/03/28 06:01:31 [DEBUG] memberlist: TCP connection from=127.0.0.1:36828
2016/03/28 06:01:31 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15578
2016/03/28 06:01:31 [INFO] serf: EventMemberJoin: Node 15579.dc1 127.0.0.1
2016/03/28 06:01:31 [INFO] consul: adding WAN server Node 15579.dc1 (Addr: 127.0.0.1:15580) (DC: dc1)
2016/03/28 06:01:31 [INFO] serf: EventMemberJoin: Node 15575.xxx 127.0.0.1
2016/03/28 06:01:31 [INFO] consul: adding WAN server Node 15575.xxx (Addr: 127.0.0.1:15576) (DC: xxx)
2016/03/28 06:01:31 [DEBUG] memberlist: TCP connection from=127.0.0.1:36829
2016/03/28 06:01:31 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15578
2016/03/28 06:01:31 [INFO] serf: EventMemberJoin: Node 15583.dc2 127.0.0.1
2016/03/28 06:01:31 [INFO] consul: adding WAN server Node 15583.dc2 (Addr: 127.0.0.1:15584) (DC: dc2)
2016/03/28 06:01:31 [INFO] serf: EventMemberJoin: Node 15579.dc1 127.0.0.1
2016/03/28 06:01:31 [INFO] consul: adding WAN server Node 15579.dc1 (Addr: 127.0.0.1:15580) (DC: dc1)
2016/03/28 06:01:31 [INFO] serf: EventMemberJoin: Node 15575.xxx 127.0.0.1
2016/03/28 06:01:31 [INFO] consul: adding WAN server Node 15575.xxx (Addr: 127.0.0.1:15576) (DC: xxx)
2016/03/28 06:01:31 [INFO] consul: shutting down server
2016/03/28 06:01:31 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/28 06:01:31 [INFO] serf: EventMemberJoin: Node 15583.dc2 127.0.0.1
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/28 06:01:31 [INFO] consul: adding WAN server Node 15583.dc2 (Addr: 127.0.0.1:15584) (DC: dc2)
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/28 06:01:31 [DEBUG] memberlist: Potential blocking operation. Last command took 10.238333ms
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/28 06:01:31 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/28 06:01:31 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:31 [DEBUG] memberlist: Failed UDP ping: Node 15583.dc2 (timeout reached)
2016/03/28 06:01:31 [INFO] memberlist: Suspect Node 15583.dc2 has failed, no acks received
2016/03/28 06:01:31 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 06:01:31 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 06:01:31 [INFO] consul: shutting down server
2016/03/28 06:01:31 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:31 [DEBUG] memberlist: Failed UDP ping: Node 15583.dc2 (timeout reached)
2016/03/28 06:01:31 [INFO] memberlist: Suspect Node 15583.dc2 has failed, no acks received
2016/03/28 06:01:31 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:31 [INFO] memberlist: Marking Node 15583.dc2 as failed, suspect timeout reached
2016/03/28 06:01:31 [INFO] serf: EventMemberFailed: Node 15583.dc2 127.0.0.1
2016/03/28 06:01:31 [INFO] memberlist: Marking Node 15583.dc2 as failed, suspect timeout reached
2016/03/28 06:01:31 [INFO] serf: EventMemberFailed: Node 15583.dc2 127.0.0.1
2016/03/28 06:01:31 [INFO] consul: removing WAN server Node 15583.dc2 (Addr: 127.0.0.1:15584) (DC: dc2)
2016/03/28 06:01:31 [DEBUG] memberlist: Failed UDP ping: Node 15583.dc2 (timeout reached)
2016/03/28 06:01:31 [INFO] memberlist: Suspect Node 15583.dc2 has failed, no acks received
2016/03/28 06:01:32 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 06:01:32 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/28 06:01:32 [INFO] consul: shutting down server
2016/03/28 06:01:32 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:32 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:32 [DEBUG] memberlist: Failed UDP ping: Node 15579.dc1 (timeout reached)
2016/03/28 06:01:32 [INFO] memberlist: Suspect Node 15579.dc1 has failed, no acks received
2016/03/28 06:01:32 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestRTT_getDatacentersByDistance (4.45s)
=== RUN   TestUserEventNames
--- PASS: TestUserEventNames (0.00s)
=== RUN   TestServer_StartStop
2016/03/28 06:01:32 [INFO] memberlist: Marking Node 15579.dc1 as failed, suspect timeout reached
2016/03/28 06:01:32 [INFO] serf: EventMemberFailed: Node 15579.dc1 127.0.0.1
2016/03/28 06:01:32 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state
2016/03/28 06:01:32 [INFO] serf: EventMemberJoin: testbuildd.raspbian.org 127.0.0.1
2016/03/28 06:01:32 [INFO] consul: adding LAN server testbuildd.raspbian.org (Addr: 127.0.0.1:8300) (DC: dc1)
2016/03/28 06:01:32 [INFO] serf: EventMemberJoin: testbuildd.raspbian.org.dc1 127.0.0.1
2016/03/28 06:01:32 [INFO] consul: shutting down server
2016/03/28 06:01:32 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:32 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:32 [INFO] consul: shutting down server
--- PASS: TestServer_StartStop (0.50s)
=== RUN   TestServer_JoinLAN
2016/03/28 06:01:33 [INFO] raft: Node at 127.0.0.1:15588 [Follower] entering Follower state
2016/03/28 06:01:33 [INFO] serf: EventMemberJoin: Node 15587 127.0.0.1
2016/03/28 06:01:33 [INFO] consul: adding LAN server Node 15587 (Addr: 127.0.0.1:15588) (DC: dc1)
2016/03/28 06:01:33 [INFO] serf: EventMemberJoin: Node 15587.dc1 127.0.0.1
2016/03/28 06:01:33 [INFO] consul: adding WAN server Node 15587.dc1 (Addr: 127.0.0.1:15588) (DC: dc1)
2016/03/28 06:01:33 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:33 [INFO] raft: Node at 127.0.0.1:15588 [Candidate] entering Candidate state
2016/03/28 06:01:34 [INFO] raft: Node at 127.0.0.1:15592 [Follower] entering Follower state
2016/03/28 06:01:34 [INFO] serf: EventMemberJoin: Node 15591 127.0.0.1
2016/03/28 06:01:34 [INFO] consul: adding LAN server Node 15591 (Addr: 127.0.0.1:15592) (DC: dc1)
2016/03/28 06:01:34 [INFO] serf: EventMemberJoin: Node 15591.dc1 127.0.0.1
2016/03/28 06:01:34 [INFO] consul: adding WAN server Node 15591.dc1 (Addr: 127.0.0.1:15592) (DC: dc1)
2016/03/28 06:01:34 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15589
2016/03/28 06:01:34 [DEBUG] memberlist: TCP connection from=127.0.0.1:58380
2016/03/28 06:01:34 [INFO] serf: EventMemberJoin: Node 15591 127.0.0.1
2016/03/28 06:01:34 [INFO] serf: EventMemberJoin: Node 15587 127.0.0.1
2016/03/28 06:01:34 [INFO] consul: adding LAN server Node 15591 (Addr: 127.0.0.1:15592) (DC: dc1)
2016/03/28 06:01:34 [INFO] consul: adding LAN server Node 15587 (Addr: 127.0.0.1:15588) (DC: dc1)
2016/03/28 06:01:34 [INFO] consul: shutting down server
2016/03/28 06:01:34 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:34 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:34 [INFO] raft: Node at 127.0.0.1:15592 [Candidate] entering Candidate state
2016/03/28 06:01:34 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:34 [DEBUG] raft: Vote granted from 127.0.0.1:15588. Tally: 1
2016/03/28 06:01:34 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:34 [INFO] raft: Node at 127.0.0.1:15588 [Leader] entering Leader state
2016/03/28 06:01:34 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:34 [INFO] consul: New leader elected: Node 15587
2016/03/28 06:01:34 [DEBUG] memberlist: Failed UDP ping: Node 15591 (timeout reached)
2016/03/28 06:01:34 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:34 [INFO] memberlist: Suspect Node 15591 has failed, no acks received
2016/03/28 06:01:34 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:34 [DEBUG] raft: Node 127.0.0.1:15588 updated peer set (2): [127.0.0.1:15588]
2016/03/28 06:01:34 [DEBUG] memberlist: Failed UDP ping: Node 15591 (timeout reached)
2016/03/28 06:01:34 [INFO] memberlist: Suspect Node 15591 has failed, no acks received
2016/03/28 06:01:34 [INFO] memberlist: Marking Node 15591 as failed, suspect timeout reached
2016/03/28 06:01:34 [INFO] serf: EventMemberFailed: Node 15591 127.0.0.1
2016/03/28 06:01:34 [INFO] consul: removing LAN server Node 15591 (Addr: 127.0.0.1:15592) (DC: dc1)
2016/03/28 06:01:34 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:34 [INFO] consul: member 'Node 15587' joined, marking health alive
2016/03/28 06:01:34 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:34 [INFO] consul: member 'Node 15591' failed, marking health critical
2016/03/28 06:01:34 [INFO] consul: shutting down server
2016/03/28 06:01:34 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:34 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:34 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestServer_JoinLAN (2.14s)
=== RUN   TestServer_JoinWAN
2016/03/28 06:01:35 [INFO] raft: Node at 127.0.0.1:15596 [Follower] entering Follower state
2016/03/28 06:01:35 [INFO] serf: EventMemberJoin: Node 15595 127.0.0.1
2016/03/28 06:01:35 [INFO] consul: adding LAN server Node 15595 (Addr: 127.0.0.1:15596) (DC: dc1)
2016/03/28 06:01:35 [INFO] serf: EventMemberJoin: Node 15595.dc1 127.0.0.1
2016/03/28 06:01:35 [INFO] consul: adding WAN server Node 15595.dc1 (Addr: 127.0.0.1:15596) (DC: dc1)
2016/03/28 06:01:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:35 [INFO] raft: Node at 127.0.0.1:15596 [Candidate] entering Candidate state
2016/03/28 06:01:36 [INFO] raft: Node at 127.0.0.1:15600 [Follower] entering Follower state
2016/03/28 06:01:36 [INFO] serf: EventMemberJoin: Node 15599 127.0.0.1
2016/03/28 06:01:36 [INFO] consul: adding LAN server Node 15599 (Addr: 127.0.0.1:15600) (DC: dc2)
2016/03/28 06:01:36 [INFO] serf: EventMemberJoin: Node 15599.dc2 127.0.0.1
2016/03/28 06:01:36 [INFO] consul: adding WAN server Node 15599.dc2 (Addr: 127.0.0.1:15600) (DC: dc2)
2016/03/28 06:01:36 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15598
2016/03/28 06:01:36 [DEBUG] memberlist: TCP connection from=127.0.0.1:54154
2016/03/28 06:01:36 [INFO] serf: EventMemberJoin: Node 15599.dc2 127.0.0.1
2016/03/28 06:01:36 [INFO] serf: EventMemberJoin: Node 15595.dc1 127.0.0.1
2016/03/28 06:01:36 [INFO] consul: adding WAN server Node 15599.dc2 (Addr: 127.0.0.1:15600) (DC: dc2)
2016/03/28 06:01:36 [INFO] consul: adding WAN server Node 15595.dc1 (Addr: 127.0.0.1:15596) (DC: dc1)
2016/03/28 06:01:36 [DEBUG] serf: messageJoinType: Node 15599.dc2
2016/03/28 06:01:36 [INFO] consul: shutting down server
2016/03/28 06:01:36 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:36 [INFO] raft: Node at 127.0.0.1:15600 [Candidate] entering Candidate state
2016/03/28 06:01:36 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:36 [DEBUG] raft: Vote granted from 127.0.0.1:15596. Tally: 1
2016/03/28 06:01:36 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:36 [INFO] raft: Node at 127.0.0.1:15596 [Leader] entering Leader state
2016/03/28 06:01:36 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:36 [INFO] consul: New leader elected: Node 15595
2016/03/28 06:01:36 [DEBUG] serf: messageJoinType: Node 15599.dc2
2016/03/28 06:01:36 [DEBUG] serf: messageJoinType: Node 15599.dc2
2016/03/28 06:01:36 [DEBUG] memberlist: Potential blocking operation. Last command took 10.758333ms
2016/03/28 06:01:36 [DEBUG] serf: messageJoinType: Node 15599.dc2
2016/03/28 06:01:36 [DEBUG] serf: messageJoinType: Node 15599.dc2
2016/03/28 06:01:36 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:36 [DEBUG] memberlist: Failed UDP ping: Node 15599.dc2 (timeout reached)
2016/03/28 06:01:36 [INFO] memberlist: Suspect Node 15599.dc2 has failed, no acks received
2016/03/28 06:01:36 [DEBUG] raft: Node 127.0.0.1:15596 updated peer set (2): [127.0.0.1:15596]
2016/03/28 06:01:36 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:36 [INFO] consul: member 'Node 15595' joined, marking health alive
2016/03/28 06:01:36 [DEBUG] memberlist: Failed UDP ping: Node 15599.dc2 (timeout reached)
2016/03/28 06:01:36 [INFO] memberlist: Suspect Node 15599.dc2 has failed, no acks received
2016/03/28 06:01:36 [INFO] memberlist: Marking Node 15599.dc2 as failed, suspect timeout reached
2016/03/28 06:01:36 [INFO] serf: EventMemberFailed: Node 15599.dc2 127.0.0.1
2016/03/28 06:01:36 [INFO] consul: removing WAN server Node 15599.dc2 (Addr: 127.0.0.1:15600) (DC: dc2)
2016/03/28 06:01:36 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:36 [INFO] consul: shutting down server
2016/03/28 06:01:36 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:36 [WARN] serf: Shutdown without a Leave
--- PASS: TestServer_JoinWAN (2.18s)
=== RUN   TestServer_JoinSeparateLanAndWanAddresses
2016/03/28 06:01:37 [INFO] raft: Node at 127.0.0.1:15604 [Follower] entering Follower state
2016/03/28 06:01:37 [INFO] serf: EventMemberJoin: Node 15603 127.0.0.1
2016/03/28 06:01:37 [INFO] consul: adding LAN server Node 15603 (Addr: 127.0.0.1:15604) (DC: dc1)
2016/03/28 06:01:37 [INFO] serf: EventMemberJoin: Node 15603.dc1 127.0.0.1
2016/03/28 06:01:37 [INFO] consul: adding WAN server Node 15603.dc1 (Addr: 127.0.0.1:15604) (DC: dc1)
2016/03/28 06:01:37 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:37 [INFO] raft: Node at 127.0.0.1:15604 [Candidate] entering Candidate state
2016/03/28 06:01:38 [INFO] raft: Node at 127.0.0.1:15608 [Follower] entering Follower state
2016/03/28 06:01:38 [INFO] serf: EventMemberJoin: s2 127.0.0.3
2016/03/28 06:01:38 [INFO] consul: adding LAN server s2 (Addr: 127.0.0.3:15608) (DC: dc2)
2016/03/28 06:01:38 [INFO] serf: EventMemberJoin: s2.dc2 127.0.0.2
2016/03/28 06:01:38 [INFO] consul: adding WAN server s2.dc2 (Addr: 127.0.0.2:15608) (DC: dc2)
2016/03/28 06:01:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:38 [INFO] raft: Node at 127.0.0.1:15608 [Candidate] entering Candidate state
2016/03/28 06:01:38 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:38 [DEBUG] raft: Vote granted from 127.0.0.1:15604. Tally: 1
2016/03/28 06:01:38 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:38 [INFO] raft: Node at 127.0.0.1:15604 [Leader] entering Leader state
2016/03/28 06:01:38 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:38 [INFO] consul: New leader elected: Node 15603
2016/03/28 06:01:38 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:39 [DEBUG] raft: Node 127.0.0.1:15604 updated peer set (2): [127.0.0.1:15604]
2016/03/28 06:01:39 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:39 [INFO] consul: member 'Node 15603' joined, marking health alive
2016/03/28 06:01:39 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:39 [DEBUG] raft: Vote granted from 127.0.0.1:15608. Tally: 1
2016/03/28 06:01:39 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:39 [INFO] raft: Node at 127.0.0.1:15608 [Leader] entering Leader state
2016/03/28 06:01:39 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:39 [INFO] consul: New leader elected: s2
2016/03/28 06:01:39 [INFO] raft: Node at 127.0.0.1:15612 [Follower] entering Follower state
2016/03/28 06:01:39 [INFO] serf: EventMemberJoin: Node 15611 127.0.0.1
2016/03/28 06:01:39 [INFO] consul: adding LAN server Node 15611 (Addr: 127.0.0.1:15612) (DC: dc2)
2016/03/28 06:01:39 [INFO] serf: EventMemberJoin: Node 15611.dc2 127.0.0.1
2016/03/28 06:01:39 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15606
2016/03/28 06:01:39 [DEBUG] memberlist: TCP connection from=127.0.0.1:59628
2016/03/28 06:01:39 [INFO] consul: adding WAN server Node 15611.dc2 (Addr: 127.0.0.1:15612) (DC: dc2)
2016/03/28 06:01:39 [INFO] serf: EventMemberJoin: s2.dc2 127.0.0.2
2016/03/28 06:01:39 [INFO] serf: EventMemberJoin: Node 15603.dc1 127.0.0.1
2016/03/28 06:01:39 [INFO] consul: adding WAN server s2.dc2 (Addr: 127.0.0.2:15608) (DC: dc2)
2016/03/28 06:01:39 [INFO] consul: adding WAN server Node 15603.dc1 (Addr: 127.0.0.1:15604) (DC: dc1)
2016/03/28 06:01:39 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15609
2016/03/28 06:01:39 [DEBUG] memberlist: TCP connection from=127.0.0.1:52215
2016/03/28 06:01:39 [INFO] serf: EventMemberJoin: Node 15611 127.0.0.1
2016/03/28 06:01:39 [INFO] serf: EventMemberJoin: s2 127.0.0.3
2016/03/28 06:01:39 [INFO] consul: adding LAN server Node 15611 (Addr: 127.0.0.1:15612) (DC: dc2)
2016/03/28 06:01:39 [INFO] consul: adding LAN server s2 (Addr: 127.0.0.3:15608) (DC: dc2)
2016/03/28 06:01:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:39 [DEBUG] serf: messageJoinType: Node 15611
2016/03/28 06:01:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:39 [DEBUG] serf: messageJoinType: s2.dc2
2016/03/28 06:01:39 [DEBUG] serf: messageJoinType: s2.dc2
2016/03/28 06:01:39 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:39 [INFO] raft: Node at 127.0.0.1:15612 [Candidate] entering Candidate state
2016/03/28 06:01:39 [INFO] consul: shutting down server
2016/03/28 06:01:39 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:39 [DEBUG] memberlist: Potential blocking operation. Last command took 10.838ms
2016/03/28 06:01:39 [DEBUG] serf: messageJoinType: s2.dc2
2016/03/28 06:01:39 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/28 06:01:39 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/28 06:01:39 [WARN] memberlist: Refuting a suspect message (from: Node 15603.dc1)
2016/03/28 06:01:39 [DEBUG] serf: messageJoinType: s2.dc2
2016/03/28 06:01:39 [DEBUG] serf: messageJoinType: s2.dc2
2016/03/28 06:01:39 [DEBUG] memberlist: Failed UDP ping: Node 15611 (timeout reached)
2016/03/28 06:01:39 [INFO] memberlist: Suspect Node 15611 has failed, no acks received
2016/03/28 06:01:39 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/28 06:01:39 [DEBUG] memberlist: Failed UDP ping: Node 15611 (timeout reached)
2016/03/28 06:01:39 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/28 06:01:40 [INFO] memberlist: Suspect Node 15611 has failed, no acks received
2016/03/28 06:01:40 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:40 [INFO] memberlist: Marking Node 15611 as failed, suspect timeout reached
2016/03/28 06:01:40 [INFO] serf: EventMemberFailed: Node 15611 127.0.0.1
2016/03/28 06:01:40 [INFO] consul: removing LAN server Node 15611 (Addr: 127.0.0.1:15612) (DC: dc2)
2016/03/28 06:01:40 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:40 [DEBUG] raft: Node 127.0.0.1:15608 updated peer set (2): [127.0.0.1:15608]
2016/03/28 06:01:40 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/28 06:01:40 [DEBUG] memberlist: Failed UDP ping: Node 15611 (timeout reached)
2016/03/28 06:01:40 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/28 06:01:40 [INFO] memberlist: Suspect Node 15611 has failed, no acks received
2016/03/28 06:01:40 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/28 06:01:40 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/28 06:01:40 [WARN] memberlist: Refuting a suspect message (from: Node 15603.dc1)
2016/03/28 06:01:40 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:40 [INFO] consul: member 's2' joined, marking health alive
2016/03/28 06:01:40 [INFO] memberlist: Marking s2.dc2 as failed, suspect timeout reached
2016/03/28 06:01:40 [INFO] serf: EventMemberFailed: s2.dc2 127.0.0.2
2016/03/28 06:01:40 [INFO] consul: removing WAN server s2.dc2 (Addr: 127.0.0.2:15608) (DC: dc2)
2016/03/28 06:01:40 [INFO] serf: EventMemberJoin: s2.dc2 127.0.0.2
2016/03/28 06:01:40 [INFO] consul: adding WAN server s2.dc2 (Addr: 127.0.0.2:15608) (DC: dc2)
2016/03/28 06:01:40 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/28 06:01:40 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/28 06:01:40 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:40 [INFO] consul: member 'Node 15611' failed, marking health critical
2016/03/28 06:01:40 [INFO] consul: shutting down server
2016/03/28 06:01:40 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:40 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:40 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/28 06:01:40 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/28 06:01:40 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/28 06:01:40 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/03/28 06:01:40 [ERR] consul: failed to reconcile member: {Node 15611 127.0.0.1 15613 map[role:consul dc:dc2 vsn:2 vsn_min:1 vsn_max:3 build: port:15612 bootstrap:1] failed 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 06:01:40 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 06:01:40 [INFO] consul: shutting down server
2016/03/28 06:01:40 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:40 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/28 06:01:40 [INFO] memberlist: Marking s2.dc2 as failed, suspect timeout reached
2016/03/28 06:01:40 [INFO] serf: EventMemberFailed: s2.dc2 127.0.0.2
2016/03/28 06:01:40 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:40 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/28 06:01:40 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/28 06:01:41 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestServer_JoinSeparateLanAndWanAddresses (4.01s)
=== RUN   TestServer_LeaveLeader
2016/03/28 06:01:42 [INFO] raft: Node at 127.0.0.1:15616 [Follower] entering Follower state
2016/03/28 06:01:42 [INFO] serf: EventMemberJoin: Node 15615 127.0.0.1
2016/03/28 06:01:42 [INFO] consul: adding LAN server Node 15615 (Addr: 127.0.0.1:15616) (DC: dc1)
2016/03/28 06:01:42 [INFO] serf: EventMemberJoin: Node 15615.dc1 127.0.0.1
2016/03/28 06:01:42 [INFO] consul: adding WAN server Node 15615.dc1 (Addr: 127.0.0.1:15616) (DC: dc1)
2016/03/28 06:01:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:42 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:43 [INFO] raft: Node at 127.0.0.1:15620 [Follower] entering Follower state
2016/03/28 06:01:43 [INFO] serf: EventMemberJoin: Node 15619 127.0.0.1
2016/03/28 06:01:43 [INFO] consul: adding LAN server Node 15619 (Addr: 127.0.0.1:15620) (DC: dc1)
2016/03/28 06:01:43 [INFO] serf: EventMemberJoin: Node 15619.dc1 127.0.0.1
2016/03/28 06:01:43 [INFO] consul: adding WAN server Node 15619.dc1 (Addr: 127.0.0.1:15620) (DC: dc1)
2016/03/28 06:01:43 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15617
2016/03/28 06:01:43 [DEBUG] memberlist: TCP connection from=127.0.0.1:49597
2016/03/28 06:01:43 [INFO] serf: EventMemberJoin: Node 15619 127.0.0.1
2016/03/28 06:01:43 [INFO] consul: adding LAN server Node 15619 (Addr: 127.0.0.1:15620) (DC: dc1)
2016/03/28 06:01:43 [INFO] serf: EventMemberJoin: Node 15615 127.0.0.1
2016/03/28 06:01:43 [INFO] consul: adding LAN server Node 15615 (Addr: 127.0.0.1:15616) (DC: dc1)
2016/03/28 06:01:43 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:43 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:43 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:43 [INFO] raft: Node at 127.0.0.1:15616 [Leader] entering Leader state
2016/03/28 06:01:43 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:43 [INFO] consul: New leader elected: Node 15615
2016/03/28 06:01:43 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 06:01:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:43 [DEBUG] serf: messageJoinType: Node 15619
2016/03/28 06:01:43 [INFO] consul: New leader elected: Node 15615
2016/03/28 06:01:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:43 [DEBUG] serf: messageJoinType: Node 15619
2016/03/28 06:01:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:43 [DEBUG] serf: messageJoinType: Node 15619
2016/03/28 06:01:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:43 [DEBUG] serf: messageJoinType: Node 15619
2016/03/28 06:01:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:43 [DEBUG] serf: messageJoinType: Node 15619
2016/03/28 06:01:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:43 [DEBUG] serf: messageJoinType: Node 15619
2016/03/28 06:01:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:43 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:43 [DEBUG] serf: messageJoinType: Node 15619
2016/03/28 06:01:43 [DEBUG] serf: messageJoinType: Node 15619
2016/03/28 06:01:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:43 [DEBUG] raft: Node 127.0.0.1:15616 updated peer set (2): [127.0.0.1:15616]
2016/03/28 06:01:43 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:43 [INFO] consul: member 'Node 15615' joined, marking health alive
2016/03/28 06:01:43 [DEBUG] raft: Node 127.0.0.1:15616 updated peer set (2): [127.0.0.1:15620 127.0.0.1:15616]
2016/03/28 06:01:43 [INFO] raft: Added peer 127.0.0.1:15620, starting replication
2016/03/28 06:01:43 [DEBUG] raft-net: 127.0.0.1:15620 accepted connection from: 127.0.0.1:54996
2016/03/28 06:01:43 [DEBUG] raft-net: 127.0.0.1:15620 accepted connection from: 127.0.0.1:54997
2016/03/28 06:01:43 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/28 06:01:43 [DEBUG] raft: Failed to contact 127.0.0.1:15620 in 164.178333ms
2016/03/28 06:01:43 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 06:01:43 [WARN] raft: AppendEntries to 127.0.0.1:15620 rejected, sending older logs (next: 1)
2016/03/28 06:01:43 [INFO] raft: Node at 127.0.0.1:15616 [Follower] entering Follower state
2016/03/28 06:01:43 [INFO] consul: cluster leadership lost
2016/03/28 06:01:43 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 06:01:43 [ERR] consul: failed to reconcile member: {Node 15619 127.0.0.1 15621 map[build: port:15620 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 06:01:43 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 06:01:43 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/28 06:01:43 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:43 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:44 [DEBUG] raft-net: 127.0.0.1:15620 accepted connection from: 127.0.0.1:54998
2016/03/28 06:01:44 [DEBUG] raft: Node 127.0.0.1:15620 updated peer set (2): [127.0.0.1:15616]
2016/03/28 06:01:44 [INFO] raft: pipelining replication to peer 127.0.0.1:15620
2016/03/28 06:01:44 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15620
2016/03/28 06:01:44 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:44 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:44 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:44 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:45 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:45 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:45 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:45 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:46 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:46 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:46 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:46 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:46 [INFO] raft: Node at 127.0.0.1:15620 [Candidate] entering Candidate state
2016/03/28 06:01:46 [DEBUG] raft-net: 127.0.0.1:15616 accepted connection from: 127.0.0.1:47522
2016/03/28 06:01:46 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:46 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:46 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/28 06:01:46 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:46 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:47 [DEBUG] memberlist: Potential blocking operation. Last command took 10.384334ms
2016/03/28 06:01:47 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:47 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/28 06:01:47 [DEBUG] raft: Vote granted from 127.0.0.1:15620. Tally: 1
2016/03/28 06:01:47 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:47 [INFO] raft: Node at 127.0.0.1:15620 [Candidate] entering Candidate state
2016/03/28 06:01:48 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:48 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/28 06:01:48 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:48 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:48 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:48 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:48 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/28 06:01:48 [DEBUG] raft: Vote granted from 127.0.0.1:15620. Tally: 1
2016/03/28 06:01:48 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:48 [INFO] raft: Node at 127.0.0.1:15620 [Candidate] entering Candidate state
2016/03/28 06:01:49 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:49 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:49 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/28 06:01:49 [DEBUG] memberlist: Potential blocking operation. Last command took 12.64ms
2016/03/28 06:01:49 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:49 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:49 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:49 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/28 06:01:49 [DEBUG] raft: Vote granted from 127.0.0.1:15620. Tally: 1
2016/03/28 06:01:49 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:49 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:50 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:50 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:50 [INFO] raft: Node at 127.0.0.1:15620 [Follower] entering Follower state
2016/03/28 06:01:50 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:50 [INFO] raft: Node at 127.0.0.1:15620 [Candidate] entering Candidate state
2016/03/28 06:01:51 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:51 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/28 06:01:51 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:51 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:51 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:51 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:51 [DEBUG] raft: Vote granted from 127.0.0.1:15620. Tally: 1
2016/03/28 06:01:51 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/28 06:01:51 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:51 [INFO] raft: Node at 127.0.0.1:15620 [Candidate] entering Candidate state
2016/03/28 06:01:52 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:52 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:52 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/28 06:01:52 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:52 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:52 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:52 [DEBUG] raft: Vote granted from 127.0.0.1:15620. Tally: 1
2016/03/28 06:01:52 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/28 06:01:52 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:52 [INFO] raft: Node at 127.0.0.1:15620 [Candidate] entering Candidate state
2016/03/28 06:01:52 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:52 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 06:01:52 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:52 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:52 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:52 [DEBUG] memberlist: Potential blocking operation. Last command took 11.718666ms
2016/03/28 06:01:53 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:53 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 06:01:53 [DEBUG] raft: Vote granted from 127.0.0.1:15620. Tally: 1
2016/03/28 06:01:53 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:53 [INFO] raft: Node at 127.0.0.1:15620 [Candidate] entering Candidate state
2016/03/28 06:01:53 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:53 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:53 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 06:01:53 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:53 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:54 [INFO] consul: shutting down server
2016/03/28 06:01:54 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:54 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:54 [DEBUG] raft: Vote granted from 127.0.0.1:15620. Tally: 1
2016/03/28 06:01:54 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 06:01:54 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:54 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:01:54 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15620: EOF
2016/03/28 06:01:54 [DEBUG] memberlist: Failed UDP ping: Node 15619 (timeout reached)
2016/03/28 06:01:54 [INFO] memberlist: Suspect Node 15619 has failed, no acks received
2016/03/28 06:01:54 [DEBUG] memberlist: Failed UDP ping: Node 15619 (timeout reached)
2016/03/28 06:01:54 [INFO] memberlist: Suspect Node 15619 has failed, no acks received
2016/03/28 06:01:54 [INFO] memberlist: Marking Node 15619 as failed, suspect timeout reached
2016/03/28 06:01:54 [INFO] serf: EventMemberFailed: Node 15619 127.0.0.1
2016/03/28 06:01:54 [INFO] consul: removing LAN server Node 15619 (Addr: 127.0.0.1:15620) (DC: dc1)
2016/03/28 06:01:54 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:54 [DEBUG] raft: Vote granted from 127.0.0.1:15616. Tally: 1
2016/03/28 06:01:54 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:54 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/28 06:01:54 [INFO] consul: shutting down server
2016/03/28 06:01:54 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:54 [WARN] serf: Shutdown without a Leave
2016/03/28 06:01:54 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:01:54 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15620: EOF
2016/03/28 06:01:55 [DEBUG] raft: Votes needed: 2
--- FAIL: TestServer_LeaveLeader (14.25s)
	server_test.go:353: should have 2 peers: [127.0.0.1:15620 127.0.0.1:15616]
=== RUN   TestServer_Leave
2016/03/28 06:01:55 [INFO] raft: Node at 127.0.0.1:15624 [Follower] entering Follower state
2016/03/28 06:01:55 [INFO] serf: EventMemberJoin: Node 15623 127.0.0.1
2016/03/28 06:01:55 [INFO] consul: adding LAN server Node 15623 (Addr: 127.0.0.1:15624) (DC: dc1)
2016/03/28 06:01:55 [INFO] serf: EventMemberJoin: Node 15623.dc1 127.0.0.1
2016/03/28 06:01:55 [INFO] consul: adding WAN server Node 15623.dc1 (Addr: 127.0.0.1:15624) (DC: dc1)
2016/03/28 06:01:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:55 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:01:56 [DEBUG] raft: Votes needed: 1
2016/03/28 06:01:56 [DEBUG] raft: Vote granted from 127.0.0.1:15624. Tally: 1
2016/03/28 06:01:56 [INFO] raft: Election won. Tally: 1
2016/03/28 06:01:56 [INFO] raft: Node at 127.0.0.1:15624 [Leader] entering Leader state
2016/03/28 06:01:56 [INFO] raft: Node at 127.0.0.1:15628 [Follower] entering Follower state
2016/03/28 06:01:56 [INFO] consul: cluster leadership acquired
2016/03/28 06:01:56 [INFO] consul: New leader elected: Node 15623
2016/03/28 06:01:56 [INFO] serf: EventMemberJoin: Node 15627 127.0.0.1
2016/03/28 06:01:56 [INFO] consul: adding LAN server Node 15627 (Addr: 127.0.0.1:15628) (DC: dc1)
2016/03/28 06:01:56 [INFO] serf: EventMemberJoin: Node 15627.dc1 127.0.0.1
2016/03/28 06:01:56 [INFO] consul: adding WAN server Node 15627.dc1 (Addr: 127.0.0.1:15628) (DC: dc1)
2016/03/28 06:01:56 [DEBUG] memberlist: TCP connection from=127.0.0.1:34714
2016/03/28 06:01:56 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15625
2016/03/28 06:01:56 [INFO] serf: EventMemberJoin: Node 15627 127.0.0.1
2016/03/28 06:01:56 [INFO] consul: adding LAN server Node 15627 (Addr: 127.0.0.1:15628) (DC: dc1)
2016/03/28 06:01:56 [INFO] serf: EventMemberJoin: Node 15623 127.0.0.1
2016/03/28 06:01:56 [INFO] consul: adding LAN server Node 15623 (Addr: 127.0.0.1:15624) (DC: dc1)
2016/03/28 06:01:56 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 06:01:56 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:56 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:56 [DEBUG] serf: messageJoinType: Node 15627
2016/03/28 06:01:56 [DEBUG] serf: messageJoinType: Node 15627
2016/03/28 06:01:56 [DEBUG] serf: messageJoinType: Node 15627
2016/03/28 06:01:56 [DEBUG] serf: messageJoinType: Node 15627
2016/03/28 06:01:56 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:56 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:01:56 [DEBUG] serf: messageJoinType: Node 15627
2016/03/28 06:01:56 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:01:56 [DEBUG] serf: messageJoinType: Node 15627
2016/03/28 06:01:56 [DEBUG] raft: Node 127.0.0.1:15624 updated peer set (2): [127.0.0.1:15624]
2016/03/28 06:01:56 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:01:56 [DEBUG] raft: Node 127.0.0.1:15624 updated peer set (2): [127.0.0.1:15628 127.0.0.1:15624]
2016/03/28 06:01:56 [INFO] raft: Added peer 127.0.0.1:15628, starting replication
2016/03/28 06:01:56 [DEBUG] raft-net: 127.0.0.1:15628 accepted connection from: 127.0.0.1:36833
2016/03/28 06:01:56 [DEBUG] raft-net: 127.0.0.1:15628 accepted connection from: 127.0.0.1:36834
2016/03/28 06:01:56 [DEBUG] serf: messageJoinType: Node 15627
2016/03/28 06:01:56 [DEBUG] serf: messageJoinType: Node 15627
2016/03/28 06:01:56 [DEBUG] raft: Failed to contact 127.0.0.1:15628 in 164.725ms
2016/03/28 06:01:56 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 06:01:56 [WARN] raft: Failed to get previous log: 2 log not found (last: 0)
2016/03/28 06:01:56 [INFO] raft: Node at 127.0.0.1:15624 [Follower] entering Follower state
2016/03/28 06:01:56 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 06:01:56 [INFO] consul: cluster leadership lost
2016/03/28 06:01:56 [ERR] consul: failed to reconcile member: {Node 15627 127.0.0.1 15629 map[role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15628] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 06:01:56 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 06:01:56 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/28 06:01:56 [WARN] raft: AppendEntries to 127.0.0.1:15628 rejected, sending older logs (next: 1)
2016/03/28 06:01:56 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:01:56 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:01:57 [DEBUG] raft-net: 127.0.0.1:15628 accepted connection from: 127.0.0.1:36835
2016/03/28 06:01:57 [DEBUG] raft: Node 127.0.0.1:15628 updated peer set (2): [127.0.0.1:15624]
2016/03/28 06:01:57 [INFO] raft: pipelining replication to peer 127.0.0.1:15628
2016/03/28 06:01:57 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15628
2016/03/28 06:01:58 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:58 [DEBUG] raft: Vote granted from 127.0.0.1:15624. Tally: 1
2016/03/28 06:01:58 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:58 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:01:58 [DEBUG] raft: Votes needed: 2
2016/03/28 06:01:58 [DEBUG] raft: Vote granted from 127.0.0.1:15624. Tally: 1
2016/03/28 06:01:59 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:01:59 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:01:59 [DEBUG] memberlist: Potential blocking operation. Last command took 11.676ms
2016/03/28 06:02:00 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:00 [DEBUG] raft: Vote granted from 127.0.0.1:15624. Tally: 1
2016/03/28 06:02:00 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:00 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:02:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:00 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/28 06:02:00 [DEBUG] raft-net: 127.0.0.1:15624 accepted connection from: 127.0.0.1:34445
2016/03/28 06:02:01 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:01 [DEBUG] raft: Vote granted from 127.0.0.1:15624. Tally: 1
2016/03/28 06:02:01 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/28 06:02:01 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:01 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:02:01 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:01 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/28 06:02:01 [DEBUG] raft: Vote granted from 127.0.0.1:15628. Tally: 1
2016/03/28 06:02:01 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:01 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/28 06:02:02 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:02 [DEBUG] raft: Vote granted from 127.0.0.1:15624. Tally: 1
2016/03/28 06:02:02 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/28 06:02:02 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:02 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:02:02 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:02 [DEBUG] raft: Vote granted from 127.0.0.1:15628. Tally: 1
2016/03/28 06:02:02 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/28 06:02:02 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:02 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/28 06:02:02 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:02 [DEBUG] raft: Vote granted from 127.0.0.1:15624. Tally: 1
2016/03/28 06:02:02 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/28 06:02:03 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:03 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:02:03 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:03 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/28 06:02:03 [DEBUG] raft: Vote granted from 127.0.0.1:15628. Tally: 1
2016/03/28 06:02:03 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:03 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/28 06:02:03 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:03 [DEBUG] raft: Vote granted from 127.0.0.1:15624. Tally: 1
2016/03/28 06:02:03 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/28 06:02:03 [DEBUG] memberlist: Potential blocking operation. Last command took 13.214333ms
2016/03/28 06:02:03 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:03 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:02:03 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:03 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/28 06:02:03 [DEBUG] raft: Vote granted from 127.0.0.1:15628. Tally: 1
2016/03/28 06:02:03 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:03 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/28 06:02:04 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:04 [DEBUG] raft: Vote granted from 127.0.0.1:15624. Tally: 1
2016/03/28 06:02:04 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/28 06:02:04 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:04 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:02:04 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:04 [DEBUG] raft: Vote granted from 127.0.0.1:15628. Tally: 1
2016/03/28 06:02:04 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/28 06:02:04 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:04 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/28 06:02:05 [DEBUG] memberlist: Potential blocking operation. Last command took 19.661333ms
2016/03/28 06:02:05 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:05 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:05 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/28 06:02:05 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/28 06:02:05 [DEBUG] raft: Vote granted from 127.0.0.1:15624. Tally: 1
2016/03/28 06:02:05 [DEBUG] raft: Vote granted from 127.0.0.1:15628. Tally: 1
2016/03/28 06:02:05 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:05 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/28 06:02:05 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:05 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:02:06 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:06 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 06:02:06 [DEBUG] raft: Vote granted from 127.0.0.1:15628. Tally: 1
2016/03/28 06:02:06 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:06 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/28 06:02:06 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:06 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 06:02:06 [DEBUG] raft: Vote granted from 127.0.0.1:15624. Tally: 1
2016/03/28 06:02:06 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:06 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:02:07 [INFO] consul: shutting down server
2016/03/28 06:02:07 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:07 [DEBUG] memberlist: Failed UDP ping: Node 15627 (timeout reached)
2016/03/28 06:02:07 [INFO] memberlist: Suspect Node 15627 has failed, no acks received
2016/03/28 06:02:07 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:07 [DEBUG] memberlist: Failed UDP ping: Node 15627 (timeout reached)
2016/03/28 06:02:07 [INFO] memberlist: Suspect Node 15627 has failed, no acks received
2016/03/28 06:02:07 [INFO] memberlist: Marking Node 15627 as failed, suspect timeout reached
2016/03/28 06:02:07 [INFO] serf: EventMemberFailed: Node 15627 127.0.0.1
2016/03/28 06:02:07 [INFO] consul: removing LAN server Node 15627 (Addr: 127.0.0.1:15628) (DC: dc1)
2016/03/28 06:02:07 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:02:07 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15628: EOF
2016/03/28 06:02:07 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:07 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:07 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 06:02:07 [DEBUG] raft: Vote granted from 127.0.0.1:15624. Tally: 1
2016/03/28 06:02:07 [INFO] consul: shutting down server
2016/03/28 06:02:07 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:07 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:07 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/28 06:02:07 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:07 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:02:07 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15628: EOF
2016/03/28 06:02:08 [DEBUG] raft: Votes needed: 2
--- FAIL: TestServer_Leave (12.99s)
	server_test.go:408: should have 2 peers: [127.0.0.1:15628 127.0.0.1:15624]
=== RUN   TestServer_RPC
2016/03/28 06:02:08 [INFO] raft: Node at 127.0.0.1:15632 [Follower] entering Follower state
2016/03/28 06:02:08 [INFO] serf: EventMemberJoin: Node 15631 127.0.0.1
2016/03/28 06:02:08 [INFO] consul: adding LAN server Node 15631 (Addr: 127.0.0.1:15632) (DC: dc1)
2016/03/28 06:02:08 [INFO] serf: EventMemberJoin: Node 15631.dc1 127.0.0.1
2016/03/28 06:02:08 [INFO] consul: shutting down server
2016/03/28 06:02:08 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:08 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:08 [INFO] raft: Node at 127.0.0.1:15632 [Candidate] entering Candidate state
2016/03/28 06:02:08 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:09 [DEBUG] raft: Votes needed: 1
--- PASS: TestServer_RPC (1.02s)
=== RUN   TestServer_JoinLAN_TLS
2016/03/28 06:02:09 [INFO] raft: Node at 127.0.0.1:15635 [Follower] entering Follower state
2016/03/28 06:02:09 [INFO] serf: EventMemberJoin: a.testco.internal 127.0.0.1
2016/03/28 06:02:09 [INFO] consul: adding LAN server a.testco.internal (Addr: 127.0.0.1:15635) (DC: dc1)
2016/03/28 06:02:09 [INFO] serf: EventMemberJoin: a.testco.internal.dc1 127.0.0.1
2016/03/28 06:02:09 [INFO] consul: adding WAN server a.testco.internal.dc1 (Addr: 127.0.0.1:15635) (DC: dc1)
2016/03/28 06:02:09 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:09 [INFO] raft: Node at 127.0.0.1:15635 [Candidate] entering Candidate state
2016/03/28 06:02:10 [INFO] raft: Node at 127.0.0.1:15638 [Follower] entering Follower state
2016/03/28 06:02:10 [INFO] serf: EventMemberJoin: b.testco.internal 127.0.0.1
2016/03/28 06:02:10 [INFO] consul: adding LAN server b.testco.internal (Addr: 127.0.0.1:15638) (DC: dc1)
2016/03/28 06:02:10 [INFO] serf: EventMemberJoin: b.testco.internal.dc1 127.0.0.1
2016/03/28 06:02:10 [DEBUG] memberlist: TCP connection from=127.0.0.1:57686
2016/03/28 06:02:10 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15636
2016/03/28 06:02:10 [INFO] consul: adding WAN server b.testco.internal.dc1 (Addr: 127.0.0.1:15638) (DC: dc1)
2016/03/28 06:02:10 [INFO] serf: EventMemberJoin: b.testco.internal 127.0.0.1
2016/03/28 06:02:10 [INFO] consul: adding LAN server b.testco.internal (Addr: 127.0.0.1:15638) (DC: dc1)
2016/03/28 06:02:10 [INFO] serf: EventMemberJoin: a.testco.internal 127.0.0.1
2016/03/28 06:02:10 [INFO] consul: adding LAN server a.testco.internal (Addr: 127.0.0.1:15635) (DC: dc1)
2016/03/28 06:02:10 [DEBUG] raft: Votes needed: 1
2016/03/28 06:02:10 [DEBUG] raft: Vote granted from 127.0.0.1:15635. Tally: 1
2016/03/28 06:02:10 [INFO] raft: Election won. Tally: 1
2016/03/28 06:02:10 [INFO] raft: Node at 127.0.0.1:15635 [Leader] entering Leader state
2016/03/28 06:02:10 [INFO] consul: cluster leadership acquired
2016/03/28 06:02:10 [INFO] consul: New leader elected: a.testco.internal
2016/03/28 06:02:10 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 06:02:10 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/28 06:02:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:02:10 [INFO] consul: New leader elected: a.testco.internal
2016/03/28 06:02:10 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/28 06:02:10 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/28 06:02:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:02:10 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/28 06:02:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:02:10 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/28 06:02:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:02:10 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/28 06:02:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:02:10 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/28 06:02:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:02:10 [DEBUG] memberlist: Potential blocking operation. Last command took 15.687ms
2016/03/28 06:02:10 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/28 06:02:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:02:10 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:02:11 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/28 06:02:11 [DEBUG] raft: Node 127.0.0.1:15635 updated peer set (2): [127.0.0.1:15635]
2016/03/28 06:02:11 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:02:11 [INFO] consul: member 'a.testco.internal' joined, marking health alive
2016/03/28 06:02:11 [DEBUG] raft: Node 127.0.0.1:15635 updated peer set (2): [127.0.0.1:15638 127.0.0.1:15635]
2016/03/28 06:02:11 [INFO] raft: Added peer 127.0.0.1:15638, starting replication
2016/03/28 06:02:11 [DEBUG] raft-net: 127.0.0.1:15638 accepted connection from: 127.0.0.1:36229
2016/03/28 06:02:11 [DEBUG] raft-net: 127.0.0.1:15638 accepted connection from: 127.0.0.1:36230
2016/03/28 06:02:11 [DEBUG] raft: Failed to contact 127.0.0.1:15638 in 289.756334ms
2016/03/28 06:02:11 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/28 06:02:11 [INFO] raft: Node at 127.0.0.1:15635 [Follower] entering Follower state
2016/03/28 06:02:11 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/28 06:02:11 [INFO] consul: cluster leadership lost
2016/03/28 06:02:11 [ERR] consul: failed to reconcile member: {b.testco.internal 127.0.0.1 15639 map[vsn:2 vsn_min:1 vsn_max:3 build: port:15638 role:consul dc:dc1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/28 06:02:11 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/28 06:02:11 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/28 06:02:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:11 [INFO] raft: Node at 127.0.0.1:15635 [Candidate] entering Candidate state
2016/03/28 06:02:11 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/28 06:02:11 [WARN] raft: AppendEntries to 127.0.0.1:15638 rejected, sending older logs (next: 1)
2016/03/28 06:02:12 [DEBUG] raft-net: 127.0.0.1:15638 accepted connection from: 127.0.0.1:36231
2016/03/28 06:02:12 [DEBUG] raft: Node 127.0.0.1:15638 updated peer set (2): [127.0.0.1:15635]
2016/03/28 06:02:12 [INFO] raft: pipelining replication to peer 127.0.0.1:15638
2016/03/28 06:02:12 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15638
2016/03/28 06:02:12 [INFO] consul: shutting down server
2016/03/28 06:02:12 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:12 [DEBUG] memberlist: Failed UDP ping: b.testco.internal (timeout reached)
2016/03/28 06:02:12 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:12 [INFO] memberlist: Suspect b.testco.internal has failed, no acks received
2016/03/28 06:02:12 [DEBUG] memberlist: Failed UDP ping: b.testco.internal (timeout reached)
2016/03/28 06:02:12 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:12 [DEBUG] raft: Vote granted from 127.0.0.1:15635. Tally: 1
2016/03/28 06:02:12 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:02:12 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:02:12 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15638: EOF
2016/03/28 06:02:12 [INFO] memberlist: Suspect b.testco.internal has failed, no acks received
2016/03/28 06:02:12 [INFO] memberlist: Marking b.testco.internal as failed, suspect timeout reached
2016/03/28 06:02:12 [INFO] serf: EventMemberFailed: b.testco.internal 127.0.0.1
2016/03/28 06:02:12 [INFO] consul: removing LAN server b.testco.internal (Addr: 127.0.0.1:15638) (DC: dc1)
2016/03/28 06:02:12 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:12 [INFO] raft: Node at 127.0.0.1:15635 [Candidate] entering Candidate state
2016/03/28 06:02:13 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:02:13 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15638: EOF
2016/03/28 06:02:13 [INFO] consul: shutting down server
2016/03/28 06:02:13 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:14 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:14 [DEBUG] raft: Votes needed: 2
--- PASS: TestServer_JoinLAN_TLS (4.89s)
=== RUN   TestServer_Expect
2016/03/28 06:02:14 [INFO] raft: Node at 127.0.0.1:15642 [Follower] entering Follower state
2016/03/28 06:02:14 [INFO] serf: EventMemberJoin: Node 15641 127.0.0.1
2016/03/28 06:02:14 [INFO] consul: adding LAN server Node 15641 (Addr: 127.0.0.1:15642) (DC: dc1)
2016/03/28 06:02:14 [INFO] serf: EventMemberJoin: Node 15641.dc1 127.0.0.1
2016/03/28 06:02:14 [INFO] consul: adding WAN server Node 15641.dc1 (Addr: 127.0.0.1:15642) (DC: dc1)
2016/03/28 06:02:14 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 06:02:15 [INFO] raft: Node at 127.0.0.1:15646 [Follower] entering Follower state
2016/03/28 06:02:15 [INFO] serf: EventMemberJoin: Node 15645 127.0.0.1
2016/03/28 06:02:15 [INFO] consul: adding LAN server Node 15645 (Addr: 127.0.0.1:15646) (DC: dc1)
2016/03/28 06:02:15 [INFO] serf: EventMemberJoin: Node 15645.dc1 127.0.0.1
2016/03/28 06:02:15 [INFO] consul: adding WAN server Node 15645.dc1 (Addr: 127.0.0.1:15646) (DC: dc1)
2016/03/28 06:02:15 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 06:02:15 [INFO] raft: Node at 127.0.0.1:15650 [Follower] entering Follower state
2016/03/28 06:02:15 [INFO] serf: EventMemberJoin: Node 15649 127.0.0.1
2016/03/28 06:02:15 [INFO] consul: adding LAN server Node 15649 (Addr: 127.0.0.1:15650) (DC: dc1)
2016/03/28 06:02:15 [INFO] serf: EventMemberJoin: Node 15649.dc1 127.0.0.1
2016/03/28 06:02:15 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15643
2016/03/28 06:02:15 [INFO] consul: adding WAN server Node 15649.dc1 (Addr: 127.0.0.1:15650) (DC: dc1)
2016/03/28 06:02:15 [DEBUG] memberlist: TCP connection from=127.0.0.1:38249
2016/03/28 06:02:15 [INFO] serf: EventMemberJoin: Node 15645 127.0.0.1
2016/03/28 06:02:15 [INFO] serf: EventMemberJoin: Node 15641 127.0.0.1
2016/03/28 06:02:15 [INFO] consul: adding LAN server Node 15645 (Addr: 127.0.0.1:15646) (DC: dc1)
2016/03/28 06:02:15 [INFO] consul: adding LAN server Node 15641 (Addr: 127.0.0.1:15642) (DC: dc1)
2016/03/28 06:02:15 [DEBUG] memberlist: TCP connection from=127.0.0.1:38250
2016/03/28 06:02:15 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15643
2016/03/28 06:02:15 [INFO] serf: EventMemberJoin: Node 15649 127.0.0.1
2016/03/28 06:02:15 [INFO] consul: adding LAN server Node 15649 (Addr: 127.0.0.1:15650) (DC: dc1)
2016/03/28 06:02:15 [INFO] serf: EventMemberJoin: Node 15645 127.0.0.1
2016/03/28 06:02:15 [INFO] consul: Attempting bootstrap with nodes: [127.0.0.1:15646 127.0.0.1:15650 127.0.0.1:15642]
2016/03/28 06:02:15 [INFO] consul: adding LAN server Node 15645 (Addr: 127.0.0.1:15646) (DC: dc1)
2016/03/28 06:02:15 [INFO] serf: EventMemberJoin: Node 15641 127.0.0.1
2016/03/28 06:02:15 [INFO] consul: adding LAN server Node 15641 (Addr: 127.0.0.1:15642) (DC: dc1)
2016/03/28 06:02:15 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 06:02:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:15 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15649
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15645
2016/03/28 06:02:15 [DEBUG] memberlist: Potential blocking operation. Last command took 13.143666ms
2016/03/28 06:02:15 [DEBUG] memberlist: Potential blocking operation. Last command took 21.216ms
2016/03/28 06:02:15 [INFO] serf: EventMemberJoin: Node 15649 127.0.0.1
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15649
2016/03/28 06:02:15 [INFO] consul: adding LAN server Node 15649 (Addr: 127.0.0.1:15650) (DC: dc1)
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15649
2016/03/28 06:02:15 [INFO] consul: Attempting bootstrap with nodes: [127.0.0.1:15646 127.0.0.1:15642 127.0.0.1:15650]
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15649
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15645
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15649
2016/03/28 06:02:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:15 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15645
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15649
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15645
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15649
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15645
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15645
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15649
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15649
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15649
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15645
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15645
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15649
2016/03/28 06:02:15 [DEBUG] serf: messageJoinType: Node 15645
2016/03/28 06:02:16 [DEBUG] serf: messageJoinType: Node 15645
2016/03/28 06:02:16 [DEBUG] serf: messageJoinType: Node 15645
2016/03/28 06:02:16 [DEBUG] serf: messageJoinType: Node 15645
2016/03/28 06:02:16 [DEBUG] serf: messageJoinType: Node 15649
2016/03/28 06:02:16 [DEBUG] raft-net: 127.0.0.1:15650 accepted connection from: 127.0.0.1:33308
2016/03/28 06:02:16 [DEBUG] raft-net: 127.0.0.1:15646 accepted connection from: 127.0.0.1:60043
2016/03/28 06:02:16 [DEBUG] raft-net: 127.0.0.1:15642 accepted connection from: 127.0.0.1:59284
2016/03/28 06:02:16 [DEBUG] raft-net: 127.0.0.1:15650 accepted connection from: 127.0.0.1:33311
2016/03/28 06:02:16 [DEBUG] memberlist: Potential blocking operation. Last command took 11.318666ms
2016/03/28 06:02:16 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:16 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:16 [INFO] raft: Duplicate RequestVote for same term: 1
2016/03/28 06:02:16 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:16 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:16 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:16 [INFO] raft: Duplicate RequestVote for same term: 1
2016/03/28 06:02:16 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:16 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:16 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:16 [INFO] raft: Duplicate RequestVote for same term: 1
2016/03/28 06:02:16 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:16 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:17 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:17 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:17 [INFO] raft: Duplicate RequestVote for same term: 2
2016/03/28 06:02:17 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:17 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:17 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:17 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:17 [INFO] raft: Duplicate RequestVote for same term: 2
2016/03/28 06:02:17 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:17 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:17 [INFO] raft: Duplicate RequestVote for same term: 2
2016/03/28 06:02:17 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:17 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:17 [DEBUG] memberlist: Potential blocking operation. Last command took 13.311ms
2016/03/28 06:02:18 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:18 [INFO] raft: Duplicate RequestVote for same term: 3
2016/03/28 06:02:18 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:18 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:18 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:18 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:18 [INFO] raft: Duplicate RequestVote for same term: 3
2016/03/28 06:02:18 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:18 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:18 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:18 [INFO] raft: Duplicate RequestVote for same term: 3
2016/03/28 06:02:18 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:18 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:18 [DEBUG] raft-net: 127.0.0.1:15650 accepted connection from: 127.0.0.1:33312
2016/03/28 06:02:18 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:18 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:18 [INFO] raft: Duplicate RequestVote for same term: 4
2016/03/28 06:02:19 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:19 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:19 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:19 [INFO] raft: Duplicate RequestVote for same term: 4
2016/03/28 06:02:19 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:19 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:19 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:19 [INFO] raft: Duplicate RequestVote for same term: 4
2016/03/28 06:02:19 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:19 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:19 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:19 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:19 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:19 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/28 06:02:19 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/28 06:02:19 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:19 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:19 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:19 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:19 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:20 [DEBUG] memberlist: Potential blocking operation. Last command took 13.313334ms
2016/03/28 06:02:20 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/28 06:02:20 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:20 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:20 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:20 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:20 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/28 06:02:20 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/28 06:02:20 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:20 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:20 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:20 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:20 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:20 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:21 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/28 06:02:21 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:21 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:21 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:21 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:21 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:21 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/28 06:02:21 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/28 06:02:21 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:21 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:21 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:21 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:21 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:21 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/28 06:02:21 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:21 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:21 [DEBUG] memberlist: Potential blocking operation. Last command took 20.993667ms
2016/03/28 06:02:22 [DEBUG] memberlist: Potential blocking operation. Last command took 17.363ms
2016/03/28 06:02:22 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:22 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:22 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:22 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/28 06:02:22 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/28 06:02:22 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:22 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:22 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:22 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:22 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:22 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/28 06:02:22 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:22 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:22 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:22 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/28 06:02:22 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:22 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:22 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:22 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/28 06:02:23 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:23 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:23 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:23 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:23 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/28 06:02:23 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:23 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:23 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:23 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:23 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:23 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/28 06:02:23 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/28 06:02:23 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:23 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:23 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:23 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:23 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:23 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/28 06:02:23 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:23 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:24 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:24 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:24 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:24 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 06:02:24 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 06:02:24 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:24 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:24 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:24 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:24 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:24 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/28 06:02:24 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:24 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:24 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:24 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:24 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 06:02:24 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 06:02:24 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:24 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:25 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:25 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:25 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:25 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:25 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/28 06:02:25 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:25 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:25 [DEBUG] memberlist: Potential blocking operation. Last command took 11.987666ms
2016/03/28 06:02:25 [DEBUG] memberlist: Potential blocking operation. Last command took 10.929334ms
2016/03/28 06:02:26 [DEBUG] memberlist: Potential blocking operation. Last command took 17.010666ms
2016/03/28 06:02:26 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:26 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:26 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:26 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:26 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/28 06:02:26 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/28 06:02:26 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:26 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:26 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:26 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:26 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/28 06:02:26 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/28 06:02:26 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/28 06:02:26 [DEBUG] memberlist: Potential blocking operation. Last command took 12.885ms
2016/03/28 06:02:26 [INFO] consul: shutting down server
2016/03/28 06:02:26 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:26 [DEBUG] memberlist: Failed UDP ping: Node 15649 (timeout reached)
2016/03/28 06:02:26 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:26 [INFO] memberlist: Suspect Node 15649 has failed, no acks received
2016/03/28 06:02:26 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:02:26 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:02:26 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15650: EOF
2016/03/28 06:02:26 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15650: EOF
2016/03/28 06:02:26 [DEBUG] memberlist: Failed UDP ping: Node 15649 (timeout reached)
2016/03/28 06:02:26 [INFO] memberlist: Suspect Node 15649 has failed, no acks received
2016/03/28 06:02:26 [DEBUG] memberlist: Failed UDP ping: Node 15649 (timeout reached)
2016/03/28 06:02:26 [INFO] memberlist: Suspect Node 15649 has failed, no acks received
2016/03/28 06:02:26 [INFO] memberlist: Marking Node 15649 as failed, suspect timeout reached
2016/03/28 06:02:26 [INFO] serf: EventMemberFailed: Node 15649 127.0.0.1
2016/03/28 06:02:26 [INFO] consul: removing LAN server Node 15649 (Addr: 127.0.0.1:15650) (DC: dc1)
2016/03/28 06:02:26 [INFO] memberlist: Marking Node 15649 as failed, suspect timeout reached
2016/03/28 06:02:26 [INFO] serf: EventMemberFailed: Node 15649 127.0.0.1
2016/03/28 06:02:26 [INFO] consul: removing LAN server Node 15649 (Addr: 127.0.0.1:15650) (DC: dc1)
2016/03/28 06:02:26 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:26 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:26 [INFO] raft: Duplicate RequestVote for same term: 14
2016/03/28 06:02:26 [INFO] raft: Duplicate RequestVote for same term: 14
2016/03/28 06:02:26 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:26 [DEBUG] raft: Vote granted from 127.0.0.1:15646. Tally: 1
2016/03/28 06:02:27 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:27 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/28 06:02:27 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:27 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:27 [INFO] consul: shutting down server
2016/03/28 06:02:27 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:27 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:27 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:02:27 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15650: EOF
2016/03/28 06:02:27 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15650: dial tcp 127.0.0.1:15650: getsockopt: connection refused
2016/03/28 06:02:27 [DEBUG] memberlist: Failed UDP ping: Node 15645 (timeout reached)
2016/03/28 06:02:27 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/28 06:02:27 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15646: EOF
2016/03/28 06:02:27 [INFO] memberlist: Suspect Node 15645 has failed, no acks received
2016/03/28 06:02:27 [DEBUG] memberlist: Failed UDP ping: Node 15645 (timeout reached)
2016/03/28 06:02:27 [INFO] memberlist: Suspect Node 15645 has failed, no acks received
2016/03/28 06:02:27 [INFO] memberlist: Marking Node 15645 as failed, suspect timeout reached
2016/03/28 06:02:27 [INFO] serf: EventMemberFailed: Node 15645 127.0.0.1
2016/03/28 06:02:27 [INFO] consul: removing LAN server Node 15645 (Addr: 127.0.0.1:15646) (DC: dc1)
2016/03/28 06:02:27 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:27 [DEBUG] raft: Votes needed: 2
2016/03/28 06:02:27 [INFO] raft: Duplicate RequestVote for same term: 15
2016/03/28 06:02:27 [DEBUG] raft: Vote granted from 127.0.0.1:15642. Tally: 1
2016/03/28 06:02:27 [INFO] consul: shutting down server
2016/03/28 06:02:27 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:27 [WARN] raft: Election timeout reached, restarting election
2016/03/28 06:02:27 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/28 06:02:27 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:27 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15650: dial tcp 127.0.0.1:15650: getsockopt: connection refused
2016/03/28 06:02:27 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15646: dial tcp 127.0.0.1:15646: getsockopt: connection refused
2016/03/28 06:02:28 [DEBUG] raft: Votes needed: 2
--- FAIL: TestServer_Expect (14.18s)
	server_test.go:566: should have 3 peers: []
=== RUN   TestServer_BadExpect
2016/03/28 06:02:28 [INFO] raft: Node at 127.0.0.1:15654 [Follower] entering Follower state
2016/03/28 06:02:28 [INFO] serf: EventMemberJoin: Node 15653 127.0.0.1
2016/03/28 06:02:28 [INFO] consul: adding LAN server Node 15653 (Addr: 127.0.0.1:15654) (DC: dc1)
2016/03/28 06:02:28 [INFO] serf: EventMemberJoin: Node 15653.dc1 127.0.0.1
2016/03/28 06:02:28 [INFO] consul: adding WAN server Node 15653.dc1 (Addr: 127.0.0.1:15654) (DC: dc1)
2016/03/28 06:02:28 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 06:02:29 [INFO] raft: Node at 127.0.0.1:15658 [Follower] entering Follower state
2016/03/28 06:02:29 [INFO] serf: EventMemberJoin: Node 15657 127.0.0.1
2016/03/28 06:02:29 [INFO] consul: adding LAN server Node 15657 (Addr: 127.0.0.1:15658) (DC: dc1)
2016/03/28 06:02:29 [INFO] serf: EventMemberJoin: Node 15657.dc1 127.0.0.1
2016/03/28 06:02:29 [INFO] consul: adding WAN server Node 15657.dc1 (Addr: 127.0.0.1:15658) (DC: dc1)
2016/03/28 06:02:29 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 06:02:29 [INFO] raft: Node at 127.0.0.1:15662 [Follower] entering Follower state
2016/03/28 06:02:29 [INFO] serf: EventMemberJoin: Node 15661 127.0.0.1
2016/03/28 06:02:29 [INFO] consul: adding LAN server Node 15661 (Addr: 127.0.0.1:15662) (DC: dc1)
2016/03/28 06:02:29 [INFO] serf: EventMemberJoin: Node 15661.dc1 127.0.0.1
2016/03/28 06:02:29 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15655
2016/03/28 06:02:29 [INFO] consul: adding WAN server Node 15661.dc1 (Addr: 127.0.0.1:15662) (DC: dc1)
2016/03/28 06:02:29 [DEBUG] memberlist: TCP connection from=127.0.0.1:50981
2016/03/28 06:02:29 [INFO] serf: EventMemberJoin: Node 15657 127.0.0.1
2016/03/28 06:02:29 [INFO] serf: EventMemberJoin: Node 15653 127.0.0.1
2016/03/28 06:02:29 [INFO] consul: adding LAN server Node 15657 (Addr: 127.0.0.1:15658) (DC: dc1)
2016/03/28 06:02:29 [INFO] consul: adding LAN server Node 15653 (Addr: 127.0.0.1:15654) (DC: dc1)
2016/03/28 06:02:29 [ERR] consul: Member {Node 15657 127.0.0.1 15659 map[dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15658 expect:2 role:consul] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/03/28 06:02:29 [ERR] consul: Member {Node 15653 127.0.0.1 15655 map[vsn_max:3 build: port:15654 expect:3 role:consul dc:dc1 vsn:2 vsn_min:1] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/03/28 06:02:29 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15655
2016/03/28 06:02:29 [DEBUG] memberlist: TCP connection from=127.0.0.1:50982
2016/03/28 06:02:29 [INFO] serf: EventMemberJoin: Node 15661 127.0.0.1
2016/03/28 06:02:29 [INFO] consul: adding LAN server Node 15661 (Addr: 127.0.0.1:15662) (DC: dc1)
2016/03/28 06:02:29 [ERR] consul: Member {Node 15657 127.0.0.1 15659 map[expect:2 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15658] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/03/28 06:02:29 [INFO] serf: EventMemberJoin: Node 15657 127.0.0.1
2016/03/28 06:02:29 [INFO] consul: adding LAN server Node 15657 (Addr: 127.0.0.1:15658) (DC: dc1)
2016/03/28 06:02:29 [INFO] serf: EventMemberJoin: Node 15653 127.0.0.1
2016/03/28 06:02:29 [ERR] consul: Member {Node 15657 127.0.0.1 15659 map[expect:2 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15658] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/03/28 06:02:29 [INFO] consul: adding LAN server Node 15653 (Addr: 127.0.0.1:15654) (DC: dc1)
2016/03/28 06:02:29 [ERR] consul: Member {Node 15657 127.0.0.1 15659 map[expect:2 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15658] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/03/28 06:02:29 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/28 06:02:29 [INFO] consul: shutting down server
2016/03/28 06:02:29 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:29 [DEBUG] serf: messageJoinType: Node 15657
2016/03/28 06:02:29 [INFO] serf: EventMemberJoin: Node 15661 127.0.0.1
2016/03/28 06:02:29 [INFO] consul: adding LAN server Node 15661 (Addr: 127.0.0.1:15662) (DC: dc1)
2016/03/28 06:02:29 [ERR] consul: Member {Node 15653 127.0.0.1 15655 map[vsn_min:1 vsn_max:3 build: port:15654 expect:3 role:consul dc:dc1 vsn:2] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/03/28 06:02:29 [DEBUG] memberlist: Failed UDP ping: Node 15661 (timeout reached)
2016/03/28 06:02:29 [DEBUG] serf: messageJoinType: Node 15657
2016/03/28 06:02:29 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:30 [DEBUG] serf: messageJoinType: Node 15657
2016/03/28 06:02:30 [DEBUG] memberlist: Potential blocking operation. Last command took 13.003334ms
2016/03/28 06:02:30 [DEBUG] serf: messageJoinType: Node 15657
2016/03/28 06:02:30 [INFO] memberlist: Suspect Node 15661 has failed, no acks received
2016/03/28 06:02:30 [DEBUG] serf: messageJoinType: Node 15657
2016/03/28 06:02:30 [INFO] consul: shutting down server
2016/03/28 06:02:30 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:30 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:30 [DEBUG] memberlist: Failed UDP ping: Node 15661 (timeout reached)
2016/03/28 06:02:30 [INFO] memberlist: Suspect Node 15661 has failed, no acks received
2016/03/28 06:02:30 [INFO] consul: shutting down server
2016/03/28 06:02:30 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:30 [INFO] memberlist: Marking Node 15661 as failed, suspect timeout reached
2016/03/28 06:02:30 [INFO] serf: EventMemberFailed: Node 15661 127.0.0.1
2016/03/28 06:02:30 [INFO] memberlist: Marking Node 15661 as failed, suspect timeout reached
2016/03/28 06:02:30 [INFO] serf: EventMemberFailed: Node 15661 127.0.0.1
2016/03/28 06:02:30 [DEBUG] memberlist: Failed UDP ping: Node 15657 (timeout reached)
2016/03/28 06:02:30 [INFO] memberlist: Suspect Node 15657 has failed, no acks received
2016/03/28 06:02:30 [WARN] serf: Shutdown without a Leave
--- PASS: TestServer_BadExpect (2.04s)
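[editor's note] TestServer_Expect and TestServer_BadExpect exercise the bootstrap-expect behaviour visible above: a bootstrap is attempted once the expected number of servers has been seen ("Attempting bootstrap with nodes: [...]"), and members advertising a different expect value trigger the "has a conflicting expect value" error. A minimal conceptual sketch of that decision, with illustrative types rather than Consul's real ones:

// Conceptual sketch (not Consul's implementation) of the expect/bootstrap
// decision: refuse to bootstrap on conflicting expect values, and only return
// a peer set once the expected number of servers is present.
package main

import "fmt"

type member struct {
	name   string
	addr   string
	expect int
}

func maybeBootstrap(members []member, expect int) []string {
	var peers []string
	for _, m := range members {
		if m.expect != expect {
			return nil // conflicting expect value: all nodes should expect the same number
		}
		peers = append(peers, m.addr)
	}
	if len(peers) < expect {
		return nil // not enough servers seen yet
	}
	return peers
}

func main() {
	ms := []member{
		{"Node 15641", "127.0.0.1:15642", 3},
		{"Node 15645", "127.0.0.1:15646", 3},
		{"Node 15649", "127.0.0.1:15650", 3},
	}
	fmt.Println("Attempting bootstrap with nodes:", maybeBootstrap(ms, 3))
}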
=== RUN   TestServer_globalRPCErrors
2016/03/28 06:02:30 [INFO] memberlist: Marking Node 15657 as failed, suspect timeout reached
2016/03/28 06:02:30 [INFO] serf: EventMemberFailed: Node 15657 127.0.0.1
2016/03/28 06:02:31 [INFO] raft: Node at 127.0.0.1:15666 [Follower] entering Follower state
2016/03/28 06:02:31 [INFO] serf: EventMemberJoin: Node 15665 127.0.0.1
2016/03/28 06:02:31 [INFO] consul: adding LAN server Node 15665 (Addr: 127.0.0.1:15666) (DC: dc1)
2016/03/28 06:02:31 [INFO] serf: EventMemberJoin: Node 15665.dc1 127.0.0.1
2016/03/28 06:02:31 [INFO] consul: adding WAN server Node 15665.dc1 (Addr: 127.0.0.1:15666) (DC: dc1)
2016/03/28 06:02:31 [ERR] consul.rpc: RPC error: rpc: can't find service Bad.Method from=127.0.0.1:46297
2016/03/28 06:02:31 [INFO] consul: shutting down server
2016/03/28 06:02:31 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:31 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:31 [INFO] raft: Node at 127.0.0.1:15666 [Candidate] entering Candidate state
2016/03/28 06:02:31 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:31 [DEBUG] raft: Votes needed: 1
--- PASS: TestServer_globalRPCErrors (1.34s)
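[editor's note] The "rpc: can't find service Bad.Method" text in the ERR line above is the standard library net/rpc error for a call to an unregistered service; TestServer_globalRPCErrors provokes it deliberately. A small standalone program (not Consul's RPC layer) reproduces the same message:

// Reproduce net/rpc's error for an unknown service over an in-memory pipe.
package main

import (
	"fmt"
	"net"
	"net/rpc"
)

type Echo struct{}

// Ping is a valid net/rpc method so the server has at least one service.
func (Echo) Ping(args string, reply *string) error { *reply = args; return nil }

func main() {
	srv := rpc.NewServer()
	srv.Register(Echo{})

	cliConn, srvConn := net.Pipe()
	go srv.ServeConn(srvConn)

	client := rpc.NewClient(cliConn)
	defer client.Close()

	var out string
	err := client.Call("Bad.Method", "x", &out) // no "Bad" service registered
	fmt.Println(err)                            // rpc: can't find service Bad.Method
}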
=== RUN   TestServer_Encrypted
2016/03/28 06:02:32 [INFO] raft: Node at 127.0.0.1:15670 [Follower] entering Follower state
2016/03/28 06:02:32 [INFO] serf: EventMemberJoin: Node 15669 127.0.0.1
2016/03/28 06:02:32 [INFO] consul: adding LAN server Node 15669 (Addr: 127.0.0.1:15670) (DC: dc1)
2016/03/28 06:02:32 [INFO] serf: EventMemberJoin: Node 15669.dc1 127.0.0.1
2016/03/28 06:02:32 [INFO] consul: adding WAN server Node 15669.dc1 (Addr: 127.0.0.1:15670) (DC: dc1)
2016/03/28 06:02:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:32 [INFO] raft: Node at 127.0.0.1:15670 [Candidate] entering Candidate state
2016/03/28 06:02:33 [INFO] raft: Node at 127.0.0.1:15674 [Follower] entering Follower state
2016/03/28 06:02:33 [INFO] serf: EventMemberJoin: Node 15673 127.0.0.1
2016/03/28 06:02:33 [INFO] consul: adding LAN server Node 15673 (Addr: 127.0.0.1:15674) (DC: dc1)
2016/03/28 06:02:33 [INFO] serf: EventMemberJoin: Node 15673.dc1 127.0.0.1
2016/03/28 06:02:33 [INFO] consul: shutting down server
2016/03/28 06:02:33 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:33 [INFO] consul: adding WAN server Node 15673.dc1 (Addr: 127.0.0.1:15674) (DC: dc1)
2016/03/28 06:02:33 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:33 [INFO] raft: Node at 127.0.0.1:15674 [Candidate] entering Candidate state
2016/03/28 06:02:33 [DEBUG] raft: Votes needed: 1
2016/03/28 06:02:33 [DEBUG] raft: Vote granted from 127.0.0.1:15670. Tally: 1
2016/03/28 06:02:33 [INFO] raft: Election won. Tally: 1
2016/03/28 06:02:33 [INFO] raft: Node at 127.0.0.1:15670 [Leader] entering Leader state
2016/03/28 06:02:33 [INFO] consul: cluster leadership acquired
2016/03/28 06:02:33 [INFO] consul: New leader elected: Node 15669
2016/03/28 06:02:33 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:34 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:02:34 [DEBUG] raft: Node 127.0.0.1:15670 updated peer set (2): [127.0.0.1:15670]
2016/03/28 06:02:34 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:02:34 [INFO] consul: member 'Node 15669' joined, marking health alive
2016/03/28 06:02:34 [DEBUG] raft: Votes needed: 1
2016/03/28 06:02:34 [INFO] consul: shutting down server
2016/03/28 06:02:34 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:34 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:35 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestServer_Encrypted (3.52s)
=== RUN   TestSessionEndpoint_Apply
2016/03/28 06:02:35 [INFO] raft: Node at 127.0.0.1:15678 [Follower] entering Follower state
2016/03/28 06:02:35 [INFO] serf: EventMemberJoin: Node 15677 127.0.0.1
2016/03/28 06:02:35 [INFO] consul: adding LAN server Node 15677 (Addr: 127.0.0.1:15678) (DC: dc1)
2016/03/28 06:02:35 [INFO] serf: EventMemberJoin: Node 15677.dc1 127.0.0.1
2016/03/28 06:02:35 [INFO] consul: adding WAN server Node 15677.dc1 (Addr: 127.0.0.1:15678) (DC: dc1)
2016/03/28 06:02:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:36 [INFO] raft: Node at 127.0.0.1:15678 [Candidate] entering Candidate state
2016/03/28 06:02:36 [DEBUG] raft: Votes needed: 1
2016/03/28 06:02:36 [DEBUG] raft: Vote granted from 127.0.0.1:15678. Tally: 1
2016/03/28 06:02:36 [INFO] raft: Election won. Tally: 1
2016/03/28 06:02:36 [INFO] raft: Node at 127.0.0.1:15678 [Leader] entering Leader state
2016/03/28 06:02:36 [INFO] consul: cluster leadership acquired
2016/03/28 06:02:36 [INFO] consul: New leader elected: Node 15677
2016/03/28 06:02:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:02:36 [DEBUG] raft: Node 127.0.0.1:15678 updated peer set (2): [127.0.0.1:15678]
2016/03/28 06:02:36 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:02:36 [INFO] consul: member 'Node 15677' joined, marking health alive
2016/03/28 06:02:37 [INFO] consul: shutting down server
2016/03/28 06:02:37 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:37 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:38 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestSessionEndpoint_Apply (2.73s)
=== RUN   TestSessionEndpoint_DeleteApply
2016/03/28 06:02:38 [INFO] raft: Node at 127.0.0.1:15682 [Follower] entering Follower state
2016/03/28 06:02:38 [INFO] serf: EventMemberJoin: Node 15681 127.0.0.1
2016/03/28 06:02:38 [INFO] consul: adding LAN server Node 15681 (Addr: 127.0.0.1:15682) (DC: dc1)
2016/03/28 06:02:38 [INFO] serf: EventMemberJoin: Node 15681.dc1 127.0.0.1
2016/03/28 06:02:38 [INFO] consul: adding WAN server Node 15681.dc1 (Addr: 127.0.0.1:15682) (DC: dc1)
2016/03/28 06:02:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:38 [INFO] raft: Node at 127.0.0.1:15682 [Candidate] entering Candidate state
2016/03/28 06:02:39 [DEBUG] raft: Votes needed: 1
2016/03/28 06:02:39 [DEBUG] raft: Vote granted from 127.0.0.1:15682. Tally: 1
2016/03/28 06:02:39 [INFO] raft: Election won. Tally: 1
2016/03/28 06:02:39 [INFO] raft: Node at 127.0.0.1:15682 [Leader] entering Leader state
2016/03/28 06:02:39 [INFO] consul: cluster leadership acquired
2016/03/28 06:02:39 [INFO] consul: New leader elected: Node 15681
2016/03/28 06:02:39 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:02:39 [DEBUG] raft: Node 127.0.0.1:15682 updated peer set (2): [127.0.0.1:15682]
2016/03/28 06:02:39 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:02:39 [INFO] consul: member 'Node 15681' joined, marking health alive
2016/03/28 06:02:41 [INFO] consul: shutting down server
2016/03/28 06:02:41 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:41 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:41 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 06:02:41 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestSessionEndpoint_DeleteApply (3.48s)
=== RUN   TestSessionEndpoint_Get
2016/03/28 06:02:41 [INFO] raft: Node at 127.0.0.1:15686 [Follower] entering Follower state
2016/03/28 06:02:41 [INFO] serf: EventMemberJoin: Node 15685 127.0.0.1
2016/03/28 06:02:41 [INFO] consul: adding LAN server Node 15685 (Addr: 127.0.0.1:15686) (DC: dc1)
2016/03/28 06:02:41 [INFO] serf: EventMemberJoin: Node 15685.dc1 127.0.0.1
2016/03/28 06:02:41 [INFO] consul: adding WAN server Node 15685.dc1 (Addr: 127.0.0.1:15686) (DC: dc1)
2016/03/28 06:02:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:42 [INFO] raft: Node at 127.0.0.1:15686 [Candidate] entering Candidate state
2016/03/28 06:02:42 [DEBUG] raft: Votes needed: 1
2016/03/28 06:02:42 [DEBUG] raft: Vote granted from 127.0.0.1:15686. Tally: 1
2016/03/28 06:02:42 [INFO] raft: Election won. Tally: 1
2016/03/28 06:02:42 [INFO] raft: Node at 127.0.0.1:15686 [Leader] entering Leader state
2016/03/28 06:02:42 [INFO] consul: cluster leadership acquired
2016/03/28 06:02:42 [INFO] consul: New leader elected: Node 15685
2016/03/28 06:02:43 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:02:43 [DEBUG] raft: Node 127.0.0.1:15686 updated peer set (2): [127.0.0.1:15686]
2016/03/28 06:02:43 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:02:43 [INFO] consul: member 'Node 15685' joined, marking health alive
2016/03/28 06:02:44 [INFO] consul: shutting down server
2016/03/28 06:02:44 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:44 [WARN] serf: Shutdown without a Leave
--- PASS: TestSessionEndpoint_Get (2.86s)
=== RUN   TestSessionEndpoint_List
2016/03/28 06:02:44 [INFO] raft: Node at 127.0.0.1:15690 [Follower] entering Follower state
2016/03/28 06:02:44 [INFO] serf: EventMemberJoin: Node 15689 127.0.0.1
2016/03/28 06:02:44 [INFO] consul: adding LAN server Node 15689 (Addr: 127.0.0.1:15690) (DC: dc1)
2016/03/28 06:02:44 [INFO] serf: EventMemberJoin: Node 15689.dc1 127.0.0.1
2016/03/28 06:02:44 [INFO] consul: adding WAN server Node 15689.dc1 (Addr: 127.0.0.1:15690) (DC: dc1)
2016/03/28 06:02:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:44 [INFO] raft: Node at 127.0.0.1:15690 [Candidate] entering Candidate state
2016/03/28 06:02:45 [DEBUG] raft: Votes needed: 1
2016/03/28 06:02:45 [DEBUG] raft: Vote granted from 127.0.0.1:15690. Tally: 1
2016/03/28 06:02:45 [INFO] raft: Election won. Tally: 1
2016/03/28 06:02:45 [INFO] raft: Node at 127.0.0.1:15690 [Leader] entering Leader state
2016/03/28 06:02:45 [INFO] consul: cluster leadership acquired
2016/03/28 06:02:45 [INFO] consul: New leader elected: Node 15689
2016/03/28 06:02:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:02:45 [DEBUG] raft: Node 127.0.0.1:15690 updated peer set (2): [127.0.0.1:15690]
2016/03/28 06:02:45 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:02:45 [INFO] consul: member 'Node 15689' joined, marking health alive
2016/03/28 06:02:47 [INFO] consul: shutting down server
2016/03/28 06:02:47 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:47 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:47 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestSessionEndpoint_List (3.50s)
=== RUN   TestSessionEndpoint_ApplyTimers
2016/03/28 06:02:48 [INFO] raft: Node at 127.0.0.1:15694 [Follower] entering Follower state
2016/03/28 06:02:48 [INFO] serf: EventMemberJoin: Node 15693 127.0.0.1
2016/03/28 06:02:48 [INFO] consul: adding LAN server Node 15693 (Addr: 127.0.0.1:15694) (DC: dc1)
2016/03/28 06:02:48 [INFO] serf: EventMemberJoin: Node 15693.dc1 127.0.0.1
2016/03/28 06:02:48 [INFO] consul: adding WAN server Node 15693.dc1 (Addr: 127.0.0.1:15694) (DC: dc1)
2016/03/28 06:02:48 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:48 [INFO] raft: Node at 127.0.0.1:15694 [Candidate] entering Candidate state
2016/03/28 06:02:49 [DEBUG] raft: Votes needed: 1
2016/03/28 06:02:49 [DEBUG] raft: Vote granted from 127.0.0.1:15694. Tally: 1
2016/03/28 06:02:49 [INFO] raft: Election won. Tally: 1
2016/03/28 06:02:49 [INFO] raft: Node at 127.0.0.1:15694 [Leader] entering Leader state
2016/03/28 06:02:49 [INFO] consul: cluster leadership acquired
2016/03/28 06:02:49 [INFO] consul: New leader elected: Node 15693
2016/03/28 06:02:49 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:02:49 [DEBUG] raft: Node 127.0.0.1:15694 updated peer set (2): [127.0.0.1:15694]
2016/03/28 06:02:49 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:02:49 [INFO] consul: member 'Node 15693' joined, marking health alive
2016/03/28 06:02:50 [INFO] consul: shutting down server
2016/03/28 06:02:50 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:50 [WARN] serf: Shutdown without a Leave
2016/03/28 06:02:50 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/28 06:02:50 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestSessionEndpoint_ApplyTimers (2.91s)
=== RUN   TestSessionEndpoint_Renew
2016/03/28 06:02:51 [INFO] raft: Node at 127.0.0.1:15698 [Follower] entering Follower state
2016/03/28 06:02:51 [INFO] serf: EventMemberJoin: Node 15697 127.0.0.1
2016/03/28 06:02:51 [INFO] consul: adding LAN server Node 15697 (Addr: 127.0.0.1:15698) (DC: dc1)
2016/03/28 06:02:51 [INFO] serf: EventMemberJoin: Node 15697.dc1 127.0.0.1
2016/03/28 06:02:51 [INFO] consul: adding WAN server Node 15697.dc1 (Addr: 127.0.0.1:15698) (DC: dc1)
2016/03/28 06:02:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/28 06:02:51 [INFO] raft: Node at 127.0.0.1:15698 [Candidate] entering Candidate state
2016/03/28 06:02:51 [DEBUG] raft: Votes needed: 1
2016/03/28 06:02:51 [DEBUG] raft: Vote granted from 127.0.0.1:15698. Tally: 1
2016/03/28 06:02:51 [INFO] raft: Election won. Tally: 1
2016/03/28 06:02:51 [INFO] raft: Node at 127.0.0.1:15698 [Leader] entering Leader state
2016/03/28 06:02:51 [INFO] consul: cluster leadership acquired
2016/03/28 06:02:51 [INFO] consul: New leader elected: Node 15697
2016/03/28 06:02:51 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/28 06:02:51 [DEBUG] raft: Node 127.0.0.1:15698 updated peer set (2): [127.0.0.1:15698]
2016/03/28 06:02:52 [DEBUG] consul: reset tombstone GC to index 2
2016/03/28 06:02:52 [INFO] consul: member 'Node 15697' joined, marking health alive
SIGQUIT: quit
PC=0x731c8 m=0

goroutine 0 [idle]:
runtime.futex(0x916ef4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1f6d0, 0x0, 0x0, 0x0, ...)
	/usr/lib/go/src/runtime/sys_linux_arm.s:246 +0x1c
runtime.futexsleep(0x916ef4, 0x0, 0xffffffff, 0xffffffff)
	/usr/lib/go/src/runtime/os1_linux.go:40 +0x68
runtime.notesleep(0x916ef4)
	/usr/lib/go/src/runtime/lock_futex.go:145 +0xa4
runtime.stopm()
	/usr/lib/go/src/runtime/proc.go:1535 +0x100
runtime.findrunnable(0x10b1c000, 0x0)
	/usr/lib/go/src/runtime/proc.go:1973 +0x7c8
runtime.schedule()
	/usr/lib/go/src/runtime/proc.go:2072 +0x26c
runtime.park_m(0x10ef1ad0)
	/usr/lib/go/src/runtime/proc.go:2137 +0x16c
runtime.mcall(0x916900)
	/usr/lib/go/src/runtime/asm_arm.s:183 +0x5c

goroutine 1 [chan receive]:
testing.RunTests(0x75580c, 0x914d90, 0xbd, 0xbd, 0x0)
	/usr/lib/go/src/testing/testing.go:583 +0x62c
testing.(*M).Run(0x10b35f7c, 0x4)
	/usr/lib/go/src/testing/testing.go:515 +0x8c
main.main()
	github.com/hashicorp/consul/consul/_test/_testmain.go:432 +0x118

goroutine 17 [syscall, 9 minutes, locked to thread]:
runtime.goexit()
	/usr/lib/go/src/runtime/asm_arm.s:990 +0x4

goroutine 5 [syscall, 9 minutes]:
os/signal.signal_recv(0x0)
	/usr/lib/go/src/runtime/sigqueue.go:116 +0x190
os/signal.loop()
	/usr/lib/go/src/os/signal/signal_unix.go:22 +0x14
created by os/signal.init.1
	/usr/lib/go/src/os/signal/signal_unix.go:28 +0x30

goroutine 8405 [select]:
github.com/hashicorp/consul/consul.(*ConnPool).reap(0x10df4300)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:412 +0x3b4
created by github.com/hashicorp/consul/consul.NewPool
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:175 +0x1bc

goroutine 8431 [IO wait]:
net.runtime_pollWait(0xb649a010, 0x72, 0x0)
	/usr/lib/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10b529b8, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10b529b8, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x10b52980, 0x11296000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb6495030, 0x10b0a07c)
	/usr/lib/go/src/net/fd_unix.go:277 +0x20c
net.(*UDPConn).ReadFromUDP(0x10b27758, 0x11296000, 0x10000, 0x10000, 0x5487b0, 0x10000, 0x0, 0x0)
	/usr/lib/go/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x10b27758, 0x11296000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x10e8abd0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:136 +0xd18

goroutine 8472 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10cbe000, 0x6473a8, 0x5, 0x10dcf180)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 8428 [select]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x10dcf0a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

goroutine 8409 [IO wait]:
net.runtime_pollWait(0xb649a3d0, 0x72, 0x0)
	/usr/lib/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10e6d0f8, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10e6d0f8, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10e6d0c0, 0x0, 0xb540d090, 0x10f11d40)
	/usr/lib/go/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10edee80, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x10fa1560)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:135 +0xcfc

goroutine 8433 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10e8abd0, 0x5f5e100, 0x0, 0x10b52dc0, 0x10b52d00, 0x10b27d08)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 8408 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x10c06800)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 8415 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x10cbe140)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 8471 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10cbe000, 0x646438, 0x5, 0x10dcf160)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 2248 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6499b60, 0x72, 0x10f21000)
	/usr/lib/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10d5e7b8, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10d5e7b8, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).Read(0x10d5e780, 0x10f21000, 0x1000, 0x1000, 0x0, 0xb6495030, 0x10b0a07c)
	/usr/lib/go/src/net/fd_unix.go:250 +0x1c4
net.(*conn).Read(0x10b27008, 0x10f21000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/net.go:172 +0xc8
bufio.(*Reader).fill(0x10df3b30)
	/usr/lib/go/src/bufio/bufio.go:97 +0x1c4
bufio.(*Reader).ReadByte(0x10df3b30, 0x0, 0x0, 0x0)
	/usr/lib/go/src/bufio/bufio.go:229 +0x8c
github.com/hashicorp/raft.(*NetworkTransport).handleCommand(0x10f09180, 0x10df3b30, 0x10df3b90, 0x10d5e800, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:402 +0x30
github.com/hashicorp/raft.(*NetworkTransport).handleConn(0x10f09180, 0xb649a9a0, 0x10b27008)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:386 +0x364
created by github.com/hashicorp/raft.(*NetworkTransport).listen
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:373 +0x2b8

goroutine 8410 [IO wait]:
net.runtime_pollWait(0xb649a088, 0x72, 0x0)
	/usr/lib/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10e6d278, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10e6d278, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x10e6d240, 0x112a6000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb6495030, 0x10b0a07c)
	/usr/lib/go/src/net/fd_unix.go:277 +0x20c
net.(*UDPConn).ReadFromUDP(0x10edee88, 0x112a6000, 0x10000, 0x10000, 0x5487b0, 0x10000, 0x0, 0x0)
	/usr/lib/go/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x10edee88, 0x112a6000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x10fa1560)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:136 +0xd18

goroutine 8474 [select]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x10dcf6c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

goroutine 8426 [select]:
github.com/hashicorp/raft.(*Raft).runSnapshots(0x10c5c5a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:1706 +0x380
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runSnapshots)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:254 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x10c5c5a0, 0x10b27698)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 6710 [chan receive, 2 minutes]:
github.com/hashicorp/raft.(*deferError).Error(0x10d14720, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/future.go:56 +0xc4
github.com/hashicorp/consul/consul.(*Server).leaderLoop(0x10b5e620, 0x10f89d80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:75 +0x224
created by github.com/hashicorp/consul/consul.(*Server).monitorLeadership
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:37 +0xe4

goroutine 8483 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10cbe140, 0x6473a8, 0x5, 0x10c52aa0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 8498 [IO wait]:
net.runtime_pollWait(0xb6499c50, 0x72, 0x10bb9000)
	/usr/lib/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f167f8, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f167f8, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).Read(0x10f167c0, 0x10bb9000, 0x1000, 0x1000, 0x0, 0xb6495030, 0x10b0a07c)
	/usr/lib/go/src/net/fd_unix.go:250 +0x1c4
net.(*conn).Read(0x10c63ac0, 0x10bb9000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/net.go:172 +0xc8
bufio.(*Reader).fill(0x10ef4c60)
	/usr/lib/go/src/bufio/bufio.go:97 +0x1c4
bufio.(*Reader).ReadByte(0x10ef4c60, 0x5dbf28, 0x0, 0x0)
	/usr/lib/go/src/bufio/bufio.go:229 +0x8c
github.com/hashicorp/go-msgpack/codec.(*ioDecReader).readn1(0x10c7a900, 0x10b34e54)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/go-msgpack/codec/decode.go:90 +0x48
github.com/hashicorp/go-msgpack/codec.(*msgpackDecDriver).initReadNext(0x10eeef00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/go-msgpack/codec/msgpack.go:540 +0x44
github.com/hashicorp/go-msgpack/codec.(*Decoder).decode(0x10ef4c90, 0x538320, 0x10c7a940)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/go-msgpack/codec/decode.go:635 +0x54
github.com/hashicorp/go-msgpack/codec.(*Decoder).Decode(0x10ef4c90, 0x538320, 0x10c7a940, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/go-msgpack/codec/decode.go:630 +0x74
github.com/hashicorp/net-rpc-msgpackrpc.(*MsgpackCodec).read(0x10ef4c30, 0x538320, 0x10c7a940, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/net-rpc-msgpackrpc/codec.go:121 +0xbc
github.com/hashicorp/net-rpc-msgpackrpc.(*MsgpackCodec).ReadRequestHeader(0x10ef4c30, 0x10c7a940, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/net-rpc-msgpackrpc/codec.go:60 +0x40
net/rpc.(*Server).readRequestHeader(0x10e6c000, 0xb30c03d8, 0x10ef4c30, 0x0, 0x0, 0x10c7a940, 0x10edeb00, 0x0, 0x0)
	/usr/lib/go/src/net/rpc/server.go:576 +0x80
net/rpc.(*Server).readRequest(0x10e6c000, 0xb30c03d8, 0x10ef4c30, 0x15df0, 0x10e6c150, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/usr/lib/go/src/net/rpc/server.go:543 +0x84
net/rpc.(*Server).ServeRequest(0x10e6c000, 0xb30c03d8, 0x10ef4c30, 0x0, 0x0)
	/usr/lib/go/src/net/rpc/server.go:486 +0x4c
github.com/hashicorp/consul/consul.(*Server).handleConsulConn(0x10b5e1c0, 0xb649a9a0, 0x10c63ac0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:177 +0xfc
github.com/hashicorp/consul/consul.(*Server).handleConn(0x10b5e1c0, 0xb649a9a0, 0x10c63ac0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:102 +0x3e4
created by github.com/hashicorp/consul/consul.(*Server).listen
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:68 +0x164

goroutine 8417 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10cbe140, 0x646988, 0x6, 0x10c52a00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 2239 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb649a178, 0x72, 0x10efc000)
	/usr/lib/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10fc8678, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10fc8678, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).Read(0x10fc8640, 0x10efc000, 0x1000, 0x1000, 0x0, 0xb6495030, 0x10b0a07c)
	/usr/lib/go/src/net/fd_unix.go:250 +0x1c4
net.(*conn).Read(0x10b26168, 0x10efc000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/net.go:172 +0xc8
bufio.(*Reader).fill(0x10dfe180)
	/usr/lib/go/src/bufio/bufio.go:97 +0x1c4
bufio.(*Reader).ReadByte(0x10dfe180, 0x0, 0x0, 0x0)
	/usr/lib/go/src/bufio/bufio.go:229 +0x8c
github.com/hashicorp/raft.(*NetworkTransport).handleCommand(0x10f08280, 0x10dfe180, 0x10dfe1e0, 0x10fc86c0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:402 +0x30
github.com/hashicorp/raft.(*NetworkTransport).handleConn(0x10f08280, 0xb649a9a0, 0x10b26168)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:386 +0x364
created by github.com/hashicorp/raft.(*NetworkTransport).listen
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:373 +0x2b8

goroutine 7271 [chan receive, 1 minutes]:
github.com/hashicorp/raft.(*deferError).Error(0x10e3e4e0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/future.go:56 +0xc4
github.com/hashicorp/consul/consul.(*Server).leaderLoop(0x10b5f420, 0x10ee6b40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:75 +0x224
created by github.com/hashicorp/consul/consul.(*Server).monitorLeadership
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:37 +0xe4

goroutine 8456 [chan receive]:
github.com/hashicorp/raft.(*deferError).Error(0x10d14c60, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/future.go:56 +0xc4
github.com/hashicorp/consul/consul.(*Server).leaderLoop(0x10b5e1c0, 0x10d25a00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:75 +0x224
created by github.com/hashicorp/consul/consul.(*Server).monitorLeadership
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:37 +0xe4

goroutine 8416 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x10cbe140)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 8407 [select]:
github.com/hashicorp/consul/consul.(*RaftLayer).Accept(0x10f0cb00, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/raft_rpc.go:57 +0x138
github.com/hashicorp/raft.(*NetworkTransport).listen(0x10c03bd0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:362 +0x50
created by github.com/hashicorp/raft.NewNetworkTransportWithLogger
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:154 +0x270

goroutine 8427 [select]:
github.com/hashicorp/consul/consul.(*Server).monitorLeadership(0x10b5e1c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:33 +0x1a0
created by github.com/hashicorp/consul/consul.(*Server).setupRaft
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:426 +0x8a4

goroutine 1872 [chan receive, 7 minutes]:
github.com/hashicorp/raft.(*deferError).Error(0x10c32f00, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/future.go:56 +0xc4
github.com/hashicorp/consul/consul.(*Server).leaderLoop(0x10dee1c0, 0x10e4f3c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:75 +0x224
created by github.com/hashicorp/consul/consul.(*Server).monitorLeadership
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:37 +0xe4

goroutine 8406 [select]:
github.com/hashicorp/consul/consul.(*Coordinate).batchUpdate(0x10f252e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:41 +0x1cc
created by github.com/hashicorp/consul/consul.NewCoordinate
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:33 +0xc4

goroutine 8485 [IO wait]:
net.runtime_pollWait(0xb6499a70, 0x72, 0x0)
	/usr/lib/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10e6c1f8, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10e6c1f8, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10e6c1c0, 0x0, 0xb540d090, 0x10eeeef0)
	/usr/lib/go/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10edebf8, 0xb8570, 0x0, 0x0)
	/usr/lib/go/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10edebf8, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/tcpsock_posix.go:264 +0x34
github.com/hashicorp/consul/consul.(*Server).listen(0x10b5e1c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:59 +0x48
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:273 +0xea8

goroutine 8469 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x10cbe000)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 4565 [chan receive, 5 minutes]:
github.com/hashicorp/raft.(*deferError).Error(0x10e3eb40, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/future.go:56 +0xc4
github.com/hashicorp/consul/consul.(*Server).leaderLoop(0x10d902a0, 0x10ee70c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:75 +0x224
created by github.com/hashicorp/consul/consul.(*Server).monitorLeadership
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:37 +0xe4

goroutine 8304 [chan receive]:
github.com/hashicorp/raft.(*deferError).Error(0x10d142a0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/future.go:56 +0xc4
github.com/hashicorp/consul/consul.(*Server).leaderLoop(0x10f54540, 0x10fc83c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:75 +0x224
created by github.com/hashicorp/consul/consul.(*Server).monitorLeadership
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:37 +0xe4

goroutine 8468 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x10cbe000)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 8414 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10fa1560, 0x5f5e100, 0x0, 0x10e6c780, 0x10e6da40, 0x10edf008)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 8484 [select]:
github.com/hashicorp/consul/consul.(*Server).wanEventHandler(0x10b5e1c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:67 +0x2c8
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:270 +0xe8c

goroutine 8430 [IO wait]:
net.runtime_pollWait(0xb649a1f0, 0x72, 0x0)
	/usr/lib/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10b52978, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10b52978, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10b52940, 0x0, 0xb540d090, 0x10eef650)
	/usr/lib/go/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10b27750, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x10e8abd0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:135 +0xcfc

goroutine 8413 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x10fa1560, 0x10e6da40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:142 +0x1b0
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 8467 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10e8abd0, 0x5f5e100, 0x0, 0x10b52e40, 0x10b52d00, 0x10b27d18)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 8486 [select]:
github.com/hashicorp/consul/consul.(*Server).sessionStats(0x10b5e1c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/session_ttl.go:152 +0x1c4
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:276 +0xec4

goroutine 8425 [select]:
github.com/hashicorp/raft.(*Raft).runFSM(0x10c5c5a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:509 +0xd5c
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runFSM)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:253 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x10c5c5a0, 0x10b27688)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 8466 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x10e8abd0, 0x10b52d00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:142 +0x1b0
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 8424 [syscall]:
syscall.Syscall(0x94, 0x5, 0x0, 0x0, 0x0, 0xfffefff, 0xb000)
	/usr/lib/go/src/syscall/asm_linux_arm.s:17 +0x8
syscall.Fdatasync(0x5, 0x0, 0x0)
	/usr/lib/go/src/syscall/zsyscall_linux_arm.go:472 +0x44
github.com/boltdb/bolt.fdatasync(0x10c40400, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/bolt_linux.go:11 +0x40
github.com/boltdb/bolt.(*Tx).write(0x10c06780, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/tx.go:469 +0x45c
github.com/boltdb/bolt.(*Tx).Commit(0x10c06780, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/tx.go:179 +0x434
github.com/hashicorp/raft-boltdb.(*BoltStore).StoreLogs(0x10eeefa0, 0x10c62ed0, 0x1, 0x1, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft-boltdb/bolt_store.go:157 +0x484
github.com/hashicorp/raft.(*LogCache).StoreLogs(0x10ee2300, 0x10c62ed0, 0x1, 0x1, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/log_cache.go:61 +0x1bc
github.com/hashicorp/raft.(*Raft).dispatchLogs(0x10c5c5a0, 0x10cedcbc, 0x1, 0x1)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:1115 +0x40c
github.com/hashicorp/raft.(*Raft).leaderLoop(0x10c5c5a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:972 +0xbf8
github.com/hashicorp/raft.(*Raft).runLeader(0x10c5c5a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:838 +0x8a0
github.com/hashicorp/raft.(*Raft).run(0x10c5c5a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:602 +0xb8
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.run)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:252 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x10c5c5a0, 0x10b27678)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 8411 [select]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x10fa1560)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:137 +0xd34

goroutine 8412 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10fa1560, 0x5f5e100, 0x0, 0x10e6da80, 0x10e6da40, 0x10edeff8)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 8404 [sleep]:
time.Sleep(0x1aba8555, 0x3)
	/usr/lib/go/src/runtime/time.go:59 +0x104
github.com/hashicorp/consul/consul.TestSessionEndpoint_Renew(0x10c330e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/session_endpoint_test.go:367 +0x156c
testing.tRunner(0x10c330e0, 0x915540)
	/usr/lib/go/src/testing/testing.go:473 +0xa8
created by testing.RunTests
	/usr/lib/go/src/testing/testing.go:582 +0x600

goroutine 8482 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10cbe140, 0x646438, 0x5, 0x10c52a20)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 8432 [select]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x10e8abd0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:137 +0xd34

goroutine 8473 [select]:
github.com/hashicorp/consul/consul.(*Server).lanEventHandler(0x10b5e1c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:37 +0x47c
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:261 +0xd28

goroutine 8429 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x10cc6b00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 8470 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10cbe000, 0x646988, 0x6, 0x10dcf140)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

trap    0x6
error   0x0
oldmask 0x0
r0      0x916ef4
r1      0x0
r2      0x0
r3      0x0
r4      0x0
r5      0x0
r6      0x146866
r7      0xf0
r8      0x122cfb
r9      0x0
r10     0x916938
fp      0x9167ec
ip      0x8
sp      0xbeedd3dc
lr      0x3d864
pc      0x731c8
cpsr    0xa0000010
fault   0x0
*** Test killed with quit: ran too long (10m0s).
FAIL	github.com/hashicorp/consul/consul	600.109s
=== RUN   TestDelay
--- PASS: TestDelay (0.50s)
=== RUN   TestGraveyard_Lifecycle
--- PASS: TestGraveyard_Lifecycle (0.00s)
=== RUN   TestGraveyard_GC_Trigger
--- PASS: TestGraveyard_GC_Trigger (0.11s)
=== RUN   TestGraveyard_Snapshot_Restore
--- PASS: TestGraveyard_Snapshot_Restore (0.00s)
=== RUN   TestNotifyGroup
--- PASS: TestNotifyGroup (0.00s)
=== RUN   TestNotifyGroup_Clear
--- PASS: TestNotifyGroup_Clear (0.00s)
=== RUN   TestStateStore_PreparedQuery_isUUID
--- PASS: TestStateStore_PreparedQuery_isUUID (0.01s)
=== RUN   TestStateStore_PreparedQuerySet_PreparedQueryGet
--- PASS: TestStateStore_PreparedQuerySet_PreparedQueryGet (0.00s)
=== RUN   TestStateStore_PreparedQueryDelete
--- PASS: TestStateStore_PreparedQueryDelete (0.00s)
=== RUN   TestStateStore_PreparedQueryLookup
--- PASS: TestStateStore_PreparedQueryLookup (0.00s)
=== RUN   TestStateStore_PreparedQueryList
--- PASS: TestStateStore_PreparedQueryList (0.00s)
=== RUN   TestStateStore_PreparedQuery_Snapshot_Restore
--- PASS: TestStateStore_PreparedQuery_Snapshot_Restore (0.01s)
=== RUN   TestStateStore_PreparedQuery_Watches
--- PASS: TestStateStore_PreparedQuery_Watches (0.00s)
=== RUN   TestStateStore_Schema
--- PASS: TestStateStore_Schema (0.00s)
=== RUN   TestStateStore_Restore_Abort
--- PASS: TestStateStore_Restore_Abort (0.00s)
=== RUN   TestStateStore_maxIndex
--- PASS: TestStateStore_maxIndex (0.00s)
=== RUN   TestStateStore_indexUpdateMaxTxn
--- PASS: TestStateStore_indexUpdateMaxTxn (0.00s)
=== RUN   TestStateStore_GC
--- PASS: TestStateStore_GC (0.06s)
=== RUN   TestStateStore_ReapTombstones
--- PASS: TestStateStore_ReapTombstones (0.00s)
=== RUN   TestStateStore_GetWatches
--- PASS: TestStateStore_GetWatches (0.00s)
=== RUN   TestStateStore_EnsureRegistration
--- PASS: TestStateStore_EnsureRegistration (0.00s)
=== RUN   TestStateStore_EnsureRegistration_Restore
--- PASS: TestStateStore_EnsureRegistration_Restore (0.00s)
=== RUN   TestStateStore_EnsureRegistration_Watches
--- PASS: TestStateStore_EnsureRegistration_Watches (0.01s)
=== RUN   TestStateStore_EnsureNode
--- PASS: TestStateStore_EnsureNode (0.00s)
=== RUN   TestStateStore_GetNodes
--- PASS: TestStateStore_GetNodes (0.00s)
=== RUN   TestStateStore_DeleteNode
--- PASS: TestStateStore_DeleteNode (0.00s)
=== RUN   TestStateStore_Node_Snapshot
--- PASS: TestStateStore_Node_Snapshot (0.00s)
=== RUN   TestStateStore_Node_Watches
--- PASS: TestStateStore_Node_Watches (0.00s)
=== RUN   TestStateStore_EnsureService
--- PASS: TestStateStore_EnsureService (0.00s)
=== RUN   TestStateStore_Services
--- PASS: TestStateStore_Services (0.00s)
=== RUN   TestStateStore_ServiceNodes
--- PASS: TestStateStore_ServiceNodes (0.00s)
=== RUN   TestStateStore_ServiceTagNodes
--- PASS: TestStateStore_ServiceTagNodes (0.00s)
=== RUN   TestStateStore_ServiceTagNodes_MultipleTags
--- PASS: TestStateStore_ServiceTagNodes_MultipleTags (0.00s)
=== RUN   TestStateStore_DeleteService
--- PASS: TestStateStore_DeleteService (0.00s)
=== RUN   TestStateStore_Service_Snapshot
--- PASS: TestStateStore_Service_Snapshot (0.00s)
=== RUN   TestStateStore_Service_Watches
--- PASS: TestStateStore_Service_Watches (0.00s)
=== RUN   TestStateStore_EnsureCheck
--- PASS: TestStateStore_EnsureCheck (0.00s)
=== RUN   TestStateStore_EnsureCheck_defaultStatus
--- PASS: TestStateStore_EnsureCheck_defaultStatus (0.00s)
=== RUN   TestStateStore_NodeChecks
--- PASS: TestStateStore_NodeChecks (0.00s)
=== RUN   TestStateStore_ServiceChecks
--- PASS: TestStateStore_ServiceChecks (0.00s)
=== RUN   TestStateStore_ChecksInState
--- PASS: TestStateStore_ChecksInState (0.00s)
=== RUN   TestStateStore_DeleteCheck
--- PASS: TestStateStore_DeleteCheck (0.00s)
=== RUN   TestStateStore_CheckServiceNodes
--- PASS: TestStateStore_CheckServiceNodes (0.01s)
=== RUN   TestStateStore_CheckServiceTagNodes
--- PASS: TestStateStore_CheckServiceTagNodes (0.00s)
=== RUN   TestStateStore_Check_Snapshot
--- PASS: TestStateStore_Check_Snapshot (0.01s)
=== RUN   TestStateStore_Check_Watches
--- PASS: TestStateStore_Check_Watches (0.00s)
=== RUN   TestStateStore_NodeInfo_NodeDump
--- PASS: TestStateStore_NodeInfo_NodeDump (0.01s)
=== RUN   TestStateStore_KVSSet_KVSGet
--- PASS: TestStateStore_KVSSet_KVSGet (0.00s)
=== RUN   TestStateStore_KVSList
--- PASS: TestStateStore_KVSList (0.00s)
=== RUN   TestStateStore_KVSListKeys
--- PASS: TestStateStore_KVSListKeys (0.00s)
=== RUN   TestStateStore_KVSDelete
--- PASS: TestStateStore_KVSDelete (0.00s)
=== RUN   TestStateStore_KVSDeleteCAS
--- PASS: TestStateStore_KVSDeleteCAS (0.00s)
=== RUN   TestStateStore_KVSSetCAS
--- PASS: TestStateStore_KVSSetCAS (0.00s)
=== RUN   TestStateStore_KVSDeleteTree
--- PASS: TestStateStore_KVSDeleteTree (0.00s)
=== RUN   TestStateStore_KVSLockDelay
--- PASS: TestStateStore_KVSLockDelay (0.00s)
=== RUN   TestStateStore_KVSLock
--- PASS: TestStateStore_KVSLock (0.00s)
=== RUN   TestStateStore_KVSUnlock
--- PASS: TestStateStore_KVSUnlock (0.00s)
=== RUN   TestStateStore_KVS_Snapshot_Restore
--- PASS: TestStateStore_KVS_Snapshot_Restore (0.00s)
=== RUN   TestStateStore_KVS_Watches
--- PASS: TestStateStore_KVS_Watches (0.01s)
=== RUN   TestStateStore_Tombstone_Snapshot_Restore
--- PASS: TestStateStore_Tombstone_Snapshot_Restore (0.01s)
=== RUN   TestStateStore_SessionCreate_SessionGet
--- PASS: TestStateStore_SessionCreate_SessionGet (0.01s)
=== RUN   TestStateStore_NodeSessions
--- PASS: TestStateStore_NodeSessions (0.00s)
=== RUN   TestStateStore_SessionDestroy
--- PASS: TestStateStore_SessionDestroy (0.00s)
=== RUN   TestStateStore_Session_Snapshot_Restore
--- PASS: TestStateStore_Session_Snapshot_Restore (0.01s)
=== RUN   TestStateStore_Session_Watches
--- PASS: TestStateStore_Session_Watches (0.00s)
=== RUN   TestStateStore_Session_Invalidate_DeleteNode
--- PASS: TestStateStore_Session_Invalidate_DeleteNode (0.00s)
=== RUN   TestStateStore_Session_Invalidate_DeleteService
--- PASS: TestStateStore_Session_Invalidate_DeleteService (0.01s)
=== RUN   TestStateStore_Session_Invalidate_Critical_Check
--- PASS: TestStateStore_Session_Invalidate_Critical_Check (0.00s)
=== RUN   TestStateStore_Session_Invalidate_DeleteCheck
--- PASS: TestStateStore_Session_Invalidate_DeleteCheck (0.00s)
=== RUN   TestStateStore_Session_Invalidate_Key_Unlock_Behavior
--- PASS: TestStateStore_Session_Invalidate_Key_Unlock_Behavior (0.00s)
=== RUN   TestStateStore_Session_Invalidate_Key_Delete_Behavior
--- PASS: TestStateStore_Session_Invalidate_Key_Delete_Behavior (0.00s)
=== RUN   TestStateStore_Session_Invalidate_PreparedQuery_Delete
--- PASS: TestStateStore_Session_Invalidate_PreparedQuery_Delete (0.07s)
=== RUN   TestStateStore_ACLSet_ACLGet
--- PASS: TestStateStore_ACLSet_ACLGet (0.00s)
=== RUN   TestStateStore_ACLList
--- PASS: TestStateStore_ACLList (0.00s)
=== RUN   TestStateStore_ACLDelete
--- PASS: TestStateStore_ACLDelete (0.00s)
=== RUN   TestStateStore_ACL_Snapshot_Restore
--- PASS: TestStateStore_ACL_Snapshot_Restore (0.00s)
=== RUN   TestStateStore_ACL_Watches
--- PASS: TestStateStore_ACL_Watches (0.00s)
=== RUN   TestStateStore_Coordinate_Updates
--- PASS: TestStateStore_Coordinate_Updates (0.01s)
=== RUN   TestStateStore_Coordinate_Cleanup
--- PASS: TestStateStore_Coordinate_Cleanup (0.00s)
=== RUN   TestStateStore_Coordinate_Snapshot_Restore
--- PASS: TestStateStore_Coordinate_Snapshot_Restore (0.00s)
=== RUN   TestStateStore_Coordinate_Watches
--- PASS: TestStateStore_Coordinate_Watches (0.00s)
=== RUN   TestTombstoneGC_invalid
--- PASS: TestTombstoneGC_invalid (0.00s)
=== RUN   TestTombstoneGC
--- PASS: TestTombstoneGC (0.03s)
=== RUN   TestTombstoneGC_Expire
--- PASS: TestTombstoneGC_Expire (0.02s)
=== RUN   TestWatch_FullTableWatch
--- PASS: TestWatch_FullTableWatch (0.00s)
=== RUN   TestWatch_DumbWatchManager
--- PASS: TestWatch_DumbWatchManager (0.00s)
=== RUN   TestWatch_PrefixWatch
--- PASS: TestWatch_PrefixWatch (0.00s)
=== RUN   TestWatch_MultiWatch
--- PASS: TestWatch_MultiWatch (0.00s)
PASS
ok  	github.com/hashicorp/consul/consul/state	1.149s
=== RUN   TestEncodeDecode
--- PASS: TestEncodeDecode (0.00s)
=== RUN   TestStructs_Implements
--- PASS: TestStructs_Implements (0.00s)
=== RUN   TestStructs_ServiceNode_Clone
--- PASS: TestStructs_ServiceNode_Clone (0.00s)
=== RUN   TestStructs_ServiceNode_Conversions
--- PASS: TestStructs_ServiceNode_Conversions (0.00s)
=== RUN   TestStructs_NodeService_IsSame
--- PASS: TestStructs_NodeService_IsSame (0.00s)
=== RUN   TestStructs_HealthCheck_IsSame
--- PASS: TestStructs_HealthCheck_IsSame (0.00s)
=== RUN   TestStructs_CheckServiceNodes_Shuffle
--- PASS: TestStructs_CheckServiceNodes_Shuffle (0.02s)
=== RUN   TestStructs_CheckServiceNodes_Filter
--- PASS: TestStructs_CheckServiceNodes_Filter (0.00s)
=== RUN   TestStructs_DirEntry_Clone
--- PASS: TestStructs_DirEntry_Clone (0.00s)
PASS
ok  	github.com/hashicorp/consul/consul/structs	0.074s
testing: warning: no tests to run
PASS
ok  	github.com/hashicorp/consul/testutil	0.059s
=== RUN   TestConfig_AppendCA_None
--- PASS: TestConfig_AppendCA_None (0.00s)
=== RUN   TestConfig_CACertificate_Valid
--- PASS: TestConfig_CACertificate_Valid (0.00s)
=== RUN   TestConfig_KeyPair_None
--- PASS: TestConfig_KeyPair_None (0.00s)
=== RUN   TestConfig_KeyPair_Valid
--- PASS: TestConfig_KeyPair_Valid (0.01s)
=== RUN   TestConfig_OutgoingTLS_MissingCA
--- PASS: TestConfig_OutgoingTLS_MissingCA (0.00s)
=== RUN   TestConfig_OutgoingTLS_OnlyCA
--- PASS: TestConfig_OutgoingTLS_OnlyCA (0.00s)
=== RUN   TestConfig_OutgoingTLS_VerifyOutgoing
--- PASS: TestConfig_OutgoingTLS_VerifyOutgoing (0.00s)
=== RUN   TestConfig_OutgoingTLS_ServerName
--- PASS: TestConfig_OutgoingTLS_ServerName (0.00s)
=== RUN   TestConfig_OutgoingTLS_VerifyHostname
--- PASS: TestConfig_OutgoingTLS_VerifyHostname (0.00s)
=== RUN   TestConfig_OutgoingTLS_WithKeyPair
--- PASS: TestConfig_OutgoingTLS_WithKeyPair (0.01s)
=== RUN   TestConfig_outgoingWrapper_OK
--- PASS: TestConfig_outgoingWrapper_OK (0.32s)
=== RUN   TestConfig_outgoingWrapper_BadDC
--- PASS: TestConfig_outgoingWrapper_BadDC (0.18s)
=== RUN   TestConfig_outgoingWrapper_BadCert
--- PASS: TestConfig_outgoingWrapper_BadCert (0.06s)
=== RUN   TestConfig_wrapTLS_OK
--- PASS: TestConfig_wrapTLS_OK (0.17s)
=== RUN   TestConfig_wrapTLS_BadCert
--- PASS: TestConfig_wrapTLS_BadCert (0.27s)
=== RUN   TestConfig_IncomingTLS
--- PASS: TestConfig_IncomingTLS (0.01s)
=== RUN   TestConfig_IncomingTLS_MissingCA
--- PASS: TestConfig_IncomingTLS_MissingCA (0.02s)
=== RUN   TestConfig_IncomingTLS_MissingKey
--- PASS: TestConfig_IncomingTLS_MissingKey (0.00s)
=== RUN   TestConfig_IncomingTLS_NoVerify
--- PASS: TestConfig_IncomingTLS_NoVerify (0.00s)
PASS
ok  	github.com/hashicorp/consul/tlsutil	1.070s
=== RUN   TestKeyWatch
--- SKIP: TestKeyWatch (0.00s)
	funcs_test.go:19: 
=== RUN   TestKeyPrefixWatch
--- SKIP: TestKeyPrefixWatch (0.00s)
	funcs_test.go:73: 
=== RUN   TestServicesWatch
--- SKIP: TestServicesWatch (0.00s)
	funcs_test.go:129: 
=== RUN   TestNodesWatch
--- SKIP: TestNodesWatch (0.00s)
	funcs_test.go:172: 
=== RUN   TestServiceWatch
--- SKIP: TestServiceWatch (0.00s)
	funcs_test.go:221: 
=== RUN   TestChecksWatch_State
--- SKIP: TestChecksWatch_State (0.00s)
	funcs_test.go:270: 
=== RUN   TestChecksWatch_Service
--- SKIP: TestChecksWatch_Service (0.00s)
	funcs_test.go:330: 
=== RUN   TestEventWatch
--- SKIP: TestEventWatch (0.00s)
	funcs_test.go:398: 
=== RUN   TestRun_Stop
--- PASS: TestRun_Stop (0.01s)
=== RUN   TestParseBasic
--- PASS: TestParseBasic (0.00s)
=== RUN   TestParse_exempt
--- PASS: TestParse_exempt (0.00s)
PASS
ok  	github.com/hashicorp/consul/watch	0.051s
dh_auto_test: go test -v github.com/hashicorp/consul github.com/hashicorp/consul/acl github.com/hashicorp/consul/api github.com/hashicorp/consul/command github.com/hashicorp/consul/command/agent github.com/hashicorp/consul/consul github.com/hashicorp/consul/consul/state github.com/hashicorp/consul/consul/structs github.com/hashicorp/consul/testutil github.com/hashicorp/consul/tlsutil github.com/hashicorp/consul/watch returned exit code 2
debian/rules:7: recipe for target 'override_dh_auto_test' failed
make[1]: [override_dh_auto_test] Error 2 (ignored)
make[1]: Leaving directory '/<<PKGBUILDDIR>>'
 fakeroot debian/rules binary-arch
dh binary-arch --buildsystem=golang --with=golang
   dh_testroot -a -O--buildsystem=golang
   dh_prep -a -O--buildsystem=golang
   dh_auto_install -a -O--buildsystem=golang
	mkdir -p /<<BUILDDIR>>/consul-0.6.3\~dfsg/debian/tmp/usr
	cp -r bin /<<BUILDDIR>>/consul-0.6.3\~dfsg/debian/tmp/usr
	mkdir -p /<<BUILDDIR>>/consul-0.6.3\~dfsg/debian/tmp/usr/share/gocode/src/github.com/hashicorp/consul
	cp -r -T src/github.com/hashicorp/consul /<<BUILDDIR>>/consul-0.6.3\~dfsg/debian/tmp/usr/share/gocode/src/github.com/hashicorp/consul
   dh_install -a -O--buildsystem=golang
   dh_installdocs -a -O--buildsystem=golang
   dh_installchangelogs -a -O--buildsystem=golang
   dh_perl -a -O--buildsystem=golang
   dh_link -a -O--buildsystem=golang
   dh_strip_nondeterminism -a -O--buildsystem=golang
   dh_compress -a -O--buildsystem=golang
   dh_fixperms -a -O--buildsystem=golang
   dh_strip -a -O--buildsystem=golang
   dh_makeshlibs -a -O--buildsystem=golang
   dh_shlibdeps -a -O--buildsystem=golang
   dh_installdeb -a -O--buildsystem=golang
   dh_golang -a -O--buildsystem=golang
   dh_gencontrol -a -O--buildsystem=golang
dpkg-gencontrol: warning: File::FcntlLock not available; using flock which is not NFS-safe
   dh_md5sums -a -O--buildsystem=golang
   dh_builddeb -u-Zxz -a -O--buildsystem=golang
dpkg-deb: building package 'consul' in '../consul_0.6.3~dfsg-2_armhf.deb'.
 dpkg-genchanges -B -mRaspbian nitrogen6x test autobuilder <root@raspbian.org> >../consul_0.6.3~dfsg-2_armhf.changes
dpkg-genchanges: binary-only arch-specific upload (source code and arch-indep packages not included)
 dpkg-source --after-build consul-0.6.3~dfsg
dpkg-buildpackage: binary-only upload (no source included)
--------------------------------------------------------------------------------
Build finished at 20160328-0604

Finished
--------

I: Built successfully

+------------------------------------------------------------------------------+
| Post Build Chroot                                                            |
+------------------------------------------------------------------------------+


+------------------------------------------------------------------------------+
| Changes                                                                      |
+------------------------------------------------------------------------------+


consul_0.6.3~dfsg-2_armhf.changes:
----------------------------------

Format: 1.8
Date: Wed, 23 Mar 2016 16:41:11 +1100
Source: consul
Binary: golang-github-hashicorp-consul-dev consul
Architecture: armhf
Version: 0.6.3~dfsg-2
Distribution: stretch-staging
Urgency: medium
Maintainer: Raspbian nitrogen6x test autobuilder <root@raspbian.org>
Changed-By: Dmitry Smirnov <onlyjob@debian.org>
Description:
 consul     - tool for service discovery, monitoring and configuration
 golang-github-hashicorp-consul-dev - tool for service discovery, monitoring and configuration (source)
Changes:
 consul (0.6.3~dfsg-2) unstable; urgency=medium
 .
   * Drop unused "consul-migrate".
   * (Build-)Depends:
     - golang-github-hashicorp-consul-migrate-dev
   * Standards-Version: 3.9.7.
   * Added myself to Uploaders.
Checksums-Sha1:
 c50fded7a6effb91874e4efc24489a9571d22d49 2841374 consul_0.6.3~dfsg-2_armhf.deb
Checksums-Sha256:
 4716aac1fb088e080964eefa76ddf2282f6fb344425af723e3e037d11bf4d3cb 2841374 consul_0.6.3~dfsg-2_armhf.deb
Files:
 0bd73939b3d36c17a228f1caa06e7711 2841374 devel extra consul_0.6.3~dfsg-2_armhf.deb

+------------------------------------------------------------------------------+
| Package contents                                                             |
+------------------------------------------------------------------------------+


consul_0.6.3~dfsg-2_armhf.deb
-----------------------------

 new debian package, version 2.0.
 size 2841374 bytes: control archive=1582 bytes.
    3048 bytes,    34 lines      control              
     257 bytes,     4 lines      md5sums              
 Package: consul
 Version: 0.6.3~dfsg-2
 Architecture: armhf
 Maintainer: Debian Go Packaging Team <pkg-go-maintainers@lists.alioth.debian.org>
 Installed-Size: 11553
 Depends: libc6 (>= 2.4)
 Built-Using: golang (= 2:1.6-1+rpi1), golang-dns (= 0.0~git20151030.0.6a15566-1), golang-github-armon-circbuf (= 0.0~git20150827.0.bbbad09-1), golang-github-armon-go-metrics (= 0.0~git20151207.0.06b6099-1), golang-github-armon-go-radix (= 0.0~git20150602.0.fbd82e8-1), golang-github-armon-gomdb (= 0.0~git20150106.0.151f2e0-1), golang-github-elazarl-go-bindata-assetfs (= 0.0~git20151224.0.57eb5e1-1), golang-github-fsouza-go-dockerclient (= 0.0+git20160316-1), golang-github-hashicorp-go-checkpoint (= 0.0~git20151022.0.e4b2dc3-1), golang-github-hashicorp-go-memdb (= 0.0~git20160301.0.98f52f5-1), golang-github-hashicorp-go-msgpack (= 0.0~git20150518-1), golang-github-hashicorp-go-reap (= 0.0~git20160113.0.2d85522-1), golang-github-hashicorp-go-syslog (= 0.0~git20150218.0.42a2b57-1), golang-github-hashicorp-golang-lru (= 0.0~git20160207.0.a0d98a5-1), golang-github-hashicorp-hcl (= 0.0~git20151110.0.fa160f1-1), golang-github-hashicorp-logutils (= 0.0~git20150609.0.0dc08b1-1), golang-github-hashicorp-memberlist (= 0.0~git20160225.0.ae9a8d9-1), golang-github-hashicorp-raft (= 0.0~git20160317.0.3359516-1), golang-github-hashicorp-raft-boltdb (= 0.0~git20150201.d1e82c1-1), golang-github-hashicorp-scada-client (= 0.0~git20150828.0.84989fd-1), golang-github-hashicorp-serf (= 0.7.0~ds1-1), golang-github-hashicorp-yamux (= 0.0~git20151129.0.df94978-1), golang-github-inconshreveable-muxado (= 0.0~git20140312.0.f693c7e-1), golang-github-mitchellh-cli (= 0.0~git20160203.0.5c87c51-1), golang-github-mitchellh-mapstructure (= 0.0~git20150717.0.281073e-2), golang-github-ryanuber-columnize (= 2.1.0-1)
 Section: devel
 Priority: extra
 Homepage: https://github.com/hashicorp/consul
 Description: tool for service discovery, monitoring and configuration
  Consul is a tool for service discovery and configuration. Consul is
  distributed, highly available, and extremely scalable.
  .
  Consul provides several key features:
  .
   - Service Discovery - Consul makes it simple for services to register
     themselves and to discover other services via a DNS or HTTP interface.
     External services such as SaaS providers can be registered as well.
  .
   - Health Checking - Health Checking enables Consul to quickly alert operators
     about any issues in a cluster. The integration with service discovery
     prevents routing traffic to unhealthy hosts and enables service level
     circuit breakers.
  .
   - Key/Value Storage - A flexible key/value store enables storing dynamic
     configuration, feature flagging, coordination, leader election and more. The
     simple HTTP API makes it easy to use anywhere.
  .
   - Multi-Datacenter - Consul is built to be datacenter aware, and can support
     any number of regions without complex configuration.
  .
  Consul runs on Linux, Mac OS X, and Windows. It is recommended to run the
  Consul servers only on Linux, however.

drwxr-xr-x root/root         0 2016-03-28 06:03 ./
drwxr-xr-x root/root         0 2016-03-28 06:03 ./usr/
drwxr-xr-x root/root         0 2016-03-28 06:03 ./usr/bin/
-rwxr-xr-x root/root  11794320 2016-03-28 06:03 ./usr/bin/consul
drwxr-xr-x root/root         0 2016-03-28 06:03 ./usr/share/
drwxr-xr-x root/root         0 2016-03-28 06:03 ./usr/share/doc/
drwxr-xr-x root/root         0 2016-03-28 06:03 ./usr/share/doc/consul/
-rw-r--r-- root/root       315 2016-03-23 05:47 ./usr/share/doc/consul/changelog.Debian.gz
-rw-r--r-- root/root      9218 2016-01-14 18:28 ./usr/share/doc/consul/changelog.gz
-rw-r--r-- root/root     16831 2016-03-23 05:40 ./usr/share/doc/consul/copyright


+------------------------------------------------------------------------------+
| Post Build                                                                   |
+------------------------------------------------------------------------------+


+------------------------------------------------------------------------------+
| Cleanup                                                                      |
+------------------------------------------------------------------------------+

Purging /<<BUILDDIR>>
Not cleaning session: cloned chroot in use

+------------------------------------------------------------------------------+
| Summary                                                                      |
+------------------------------------------------------------------------------+

Build Architecture: armhf
Build-Space: 85356
Build-Time: 859
Distribution: stretch-staging
Host Architecture: armhf
Install-Time: 667
Job: consul_0.6.3~dfsg-2
Machine Architecture: armhf
Package: consul
Package-Time: 1570
Source-Version: 0.6.3~dfsg-2
Space: 85356
Status: successful
Version: 0.6.3~dfsg-2
--------------------------------------------------------------------------------
Finished at 20160328-0604
Build needed 00:26:10, 85356k disc space