Raspbian Package Auto-Building

Build log for consul (0.6.3~dfsg-1) on armhf

consul 0.6.3~dfsg-1 armhf → 2016-03-17 06:51:35

sbuild (Debian sbuild) 0.66.0 (04 Oct 2015) on bm-wb-04

+==============================================================================+
| consul 0.6.3~dfsg-1 (armhf)                                17 Mar 2016 06:22 |
+==============================================================================+

Package: consul
Version: 0.6.3~dfsg-1
Source Version: 0.6.3~dfsg-1
Distribution: stretch-staging
Machine Architecture: armhf
Host Architecture: armhf
Build Architecture: armhf

I: NOTICE: Log filtering will replace 'build/consul-tXjQ4n/consul-0.6.3~dfsg' with '<<PKGBUILDDIR>>'
I: NOTICE: Log filtering will replace 'build/consul-tXjQ4n' with '<<BUILDDIR>>'
I: NOTICE: Log filtering will replace 'var/lib/schroot/mount/stretch-staging-armhf-sbuild-16f760e8-1aba-4e06-962d-36c236f5e045' with '<<CHROOT>>'
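
In effect, these notices describe a plain text substitution applied to the finished log: the volatile build and chroot paths are rewritten to stable placeholders. A minimal illustration with sed (the sed invocation and the build.log filename are only a sketch, not sbuild's internal implementation):

# Longer paths first, so the package build dir is not partially rewritten as <<BUILDDIR>>.
sed -e 's|build/consul-tXjQ4n/consul-0.6.3~dfsg|<<PKGBUILDDIR>>|g' \
    -e 's|build/consul-tXjQ4n|<<BUILDDIR>>|g' \
    -e 's|var/lib/schroot/mount/stretch-staging-armhf-sbuild-16f760e8-1aba-4e06-962d-36c236f5e045|<<CHROOT>>|g' \
    build.log > build.filtered.log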

+------------------------------------------------------------------------------+
| Update chroot                                                                |
+------------------------------------------------------------------------------+

Get:1 http://172.17.0.1/private stretch-staging InRelease [11.3 kB]
Get:2 http://172.17.0.1/private stretch-staging/main Sources [8812 kB]
Get:3 http://172.17.0.1/private stretch-staging/main armhf Packages [10.9 MB]
Fetched 19.7 MB in 21s (910 kB/s)
Reading package lists...
W: No sandbox user '_apt' on the system, can not drop privileges

+------------------------------------------------------------------------------+
| Fetch source files                                                           |
+------------------------------------------------------------------------------+


Check APT
---------

Checking available source versions...

Download source files with APT
------------------------------

Reading package lists...
NOTICE: 'consul' packaging is maintained in the 'Git' version control system at:
git://anonscm.debian.org/pkg-go/packages/golang-github-hashicorp-consul.git
Please use:
git clone git://anonscm.debian.org/pkg-go/packages/golang-github-hashicorp-consul.git
to retrieve the latest (possibly unreleased) updates to the package.
Need to get 588 kB of source archives.
Get:1 http://172.17.0.1/private stretch-staging/main consul 0.6.3~dfsg-1 (dsc) [3237 B]
Get:2 http://172.17.0.1/private stretch-staging/main consul 0.6.3~dfsg-1 (tar) [576 kB]
Get:3 http://172.17.0.1/private stretch-staging/main consul 0.6.3~dfsg-1 (diff) [8760 B]
Fetched 588 kB in 0s (5969 kB/s)
Download complete and in download only mode
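
The fetch above is the download-only form of the standard apt source operation; a rough manual equivalent against the same archive would be (sketch only, assuming a deb-src entry for stretch-staging is configured):

# Fetch the .dsc, upstream tarball and Debian diff without unpacking or building.
apt-get source --download-only consul=0.6.3~dfsg-1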

Check architectures
-------------------


Check dependencies
------------------

Merged Build-Depends: build-essential, fakeroot
Filtered Build-Depends: build-essential, fakeroot
dpkg-deb: building package 'sbuild-build-depends-core-dummy' in '/<<BUILDDIR>>/resolver-KK4jyU/apt_archive/sbuild-build-depends-core-dummy.deb'.
OK
Get:1 file:/<<BUILDDIR>>/resolver-KK4jyU/apt_archive ./ InRelease
Ign:1 file:/<<BUILDDIR>>/resolver-KK4jyU/apt_archive ./ InRelease
Get:2 file:/<<BUILDDIR>>/resolver-KK4jyU/apt_archive ./ Release [2119 B]
Get:2 file:/<<BUILDDIR>>/resolver-KK4jyU/apt_archive ./ Release [2119 B]
Get:3 file:/<<BUILDDIR>>/resolver-KK4jyU/apt_archive ./ Release.gpg [299 B]
Get:3 file:/<<BUILDDIR>>/resolver-KK4jyU/apt_archive ./ Release.gpg [299 B]
Get:4 file:/<<BUILDDIR>>/resolver-KK4jyU/apt_archive ./ Sources [214 B]
Get:5 file:/<<BUILDDIR>>/resolver-KK4jyU/apt_archive ./ Packages [527 B]
Reading package lists...
W: No sandbox user '_apt' on the system, can not drop privileges
Reading package lists...
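
What the resolver did above: it merged and filtered the core Build-Depends, wrapped them in a throwaway binary package (sbuild-build-depends-core-dummy), and published that package in a local file-based apt archive so that apt can install it and pull in the real dependencies. A hand-rolled approximation of the dummy-package step (directory layout, maintainer address, and output path are illustrative, not sbuild's actual code):

# Sketch: build a dummy .deb whose Depends field carries the build dependencies.
mkdir -p dummy/DEBIAN
cat > dummy/DEBIAN/control <<'EOF'
Package: sbuild-build-depends-core-dummy
Version: 0.invalid.0
Architecture: armhf
Maintainer: sbuild dummy <invalid@example.invalid>
Description: dummy package to satisfy core build dependencies
Depends: build-essential, fakeroot
EOF
dpkg-deb --build dummy sbuild-build-depends-core-dummy.deb
# Installing this package makes apt resolve and install its Depends.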

+------------------------------------------------------------------------------+
| Install core build dependencies (apt-based resolver)                         |
+------------------------------------------------------------------------------+

Installing build dependencies
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  sbuild-build-depends-core-dummy
0 upgraded, 1 newly installed, 0 to remove and 76 not upgraded.
Need to get 0 B/768 B of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 file:/<<BUILDDIR>>/resolver-KK4jyU/apt_archive ./ sbuild-build-depends-core-dummy 0.invalid.0 [768 B]
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package sbuild-build-depends-core-dummy.
(Reading database ... 13601 files and directories currently installed.)
Preparing to unpack .../sbuild-build-depends-core-dummy.deb ...
Unpacking sbuild-build-depends-core-dummy (0.invalid.0) ...
Setting up sbuild-build-depends-core-dummy (0.invalid.0) ...
W: No sandbox user '_apt' on the system, can not drop privileges
Merged Build-Depends: debhelper (>= 9), dh-golang, golang-go, golang-dns-dev | golang-github-miekg-dns-dev, golang-github-armon-circbuf-dev, golang-github-armon-go-metrics-dev, golang-github-armon-go-radix-dev, golang-github-armon-gomdb-dev, golang-github-elazarl-go-bindata-assetfs-dev (>= 0.0~git20151224~), golang-github-fsouza-go-dockerclient-dev, golang-github-hashicorp-consul-migrate-dev, golang-github-hashicorp-go-checkpoint-dev, golang-github-hashicorp-go-memdb-dev, golang-github-hashicorp-go-msgpack-dev, golang-github-hashicorp-go-reap-dev, golang-github-hashicorp-go-syslog-dev, golang-github-hashicorp-golang-lru-dev (>= 0.0~git20160207~), golang-github-hashicorp-hcl-dev, golang-github-hashicorp-logutils-dev, golang-github-hashicorp-memberlist-dev (>= 0.0~git20160225~), golang-github-hashicorp-raft-boltdb-dev, golang-github-hashicorp-raft-dev, golang-github-hashicorp-scada-client-dev, golang-github-hashicorp-serf-dev (>= 0.7.0~), golang-github-hashicorp-yamux-dev (>= 0.0~git20151129~), golang-github-inconshreveable-muxado-dev, golang-github-mitchellh-cli-dev, golang-github-mitchellh-mapstructure-dev, golang-github-ryanuber-columnize-dev
Filtered Build-Depends: debhelper (>= 9), dh-golang, golang-go, golang-dns-dev, golang-github-armon-circbuf-dev, golang-github-armon-go-metrics-dev, golang-github-armon-go-radix-dev, golang-github-armon-gomdb-dev, golang-github-elazarl-go-bindata-assetfs-dev (>= 0.0~git20151224~), golang-github-fsouza-go-dockerclient-dev, golang-github-hashicorp-consul-migrate-dev, golang-github-hashicorp-go-checkpoint-dev, golang-github-hashicorp-go-memdb-dev, golang-github-hashicorp-go-msgpack-dev, golang-github-hashicorp-go-reap-dev, golang-github-hashicorp-go-syslog-dev, golang-github-hashicorp-golang-lru-dev (>= 0.0~git20160207~), golang-github-hashicorp-hcl-dev, golang-github-hashicorp-logutils-dev, golang-github-hashicorp-memberlist-dev (>= 0.0~git20160225~), golang-github-hashicorp-raft-boltdb-dev, golang-github-hashicorp-raft-dev, golang-github-hashicorp-scada-client-dev, golang-github-hashicorp-serf-dev (>= 0.7.0~), golang-github-hashicorp-yamux-dev (>= 0.0~git20151129~), golang-github-inconshreveable-muxado-dev, golang-github-mitchellh-cli-dev, golang-github-mitchellh-mapstructure-dev, golang-github-ryanuber-columnize-dev
dpkg-deb: building package 'sbuild-build-depends-consul-dummy' in '/<<BUILDDIR>>/resolver-IpBugf/apt_archive/sbuild-build-depends-consul-dummy.deb'.
OK
Get:1 file:/<<BUILDDIR>>/resolver-IpBugf/apt_archive ./ InRelease
Ign:1 file:/<<BUILDDIR>>/resolver-IpBugf/apt_archive ./ InRelease
Get:2 file:/<<BUILDDIR>>/resolver-IpBugf/apt_archive ./ Release [2119 B]
Get:2 file:/<<BUILDDIR>>/resolver-IpBugf/apt_archive ./ Release [2119 B]
Get:3 file:/<<BUILDDIR>>/resolver-IpBugf/apt_archive ./ Release.gpg [299 B]
Get:3 file:/<<BUILDDIR>>/resolver-IpBugf/apt_archive ./ Release.gpg [299 B]
Get:4 file:/<<BUILDDIR>>/resolver-IpBugf/apt_archive ./ Sources [508 B]
Get:5 file:/<<BUILDDIR>>/resolver-IpBugf/apt_archive ./ Packages [821 B]
Reading package lists...
W: No sandbox user '_apt' on the system, can not drop privileges
Reading package lists...

+------------------------------------------------------------------------------+
| Install consul build dependencies (apt-based resolver)                       |
+------------------------------------------------------------------------------+

Installing build dependencies
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  autotools-dev bsdmainutils ca-certificates debhelper dh-golang
  dh-strip-nondeterminism file gettext gettext-base golang-check.v1-dev
  golang-codegangsta-cli-dev golang-context-dev golang-dbus-dev golang-dns-dev
  golang-docker-dev golang-github-armon-circbuf-dev
  golang-github-armon-go-metrics-dev golang-github-armon-go-radix-dev
  golang-github-armon-gomdb-dev golang-github-boltdb-bolt-dev
  golang-github-codegangsta-cli-dev golang-github-coreos-go-systemd-dev
  golang-github-datadog-datadog-go-dev golang-github-docker-docker-dev
  golang-github-elazarl-go-bindata-assetfs-dev
  golang-github-fsouza-go-dockerclient-dev golang-github-gorilla-mux-dev
  golang-github-hashicorp-consul-migrate-dev
  golang-github-hashicorp-errwrap-dev
  golang-github-hashicorp-go-checkpoint-dev
  golang-github-hashicorp-go-cleanhttp-dev
  golang-github-hashicorp-go-immutable-radix-dev
  golang-github-hashicorp-go-memdb-dev golang-github-hashicorp-go-msgpack-dev
  golang-github-hashicorp-go-multierror-dev
  golang-github-hashicorp-go-reap-dev golang-github-hashicorp-go-syslog-dev
  golang-github-hashicorp-golang-lru-dev golang-github-hashicorp-hcl-dev
  golang-github-hashicorp-logutils-dev golang-github-hashicorp-mdns-dev
  golang-github-hashicorp-memberlist-dev
  golang-github-hashicorp-net-rpc-msgpackrpc-dev
  golang-github-hashicorp-raft-boltdb-dev golang-github-hashicorp-raft-dev
  golang-github-hashicorp-raft-mdb-dev
  golang-github-hashicorp-scada-client-dev golang-github-hashicorp-serf-dev
  golang-github-hashicorp-uuid-dev golang-github-hashicorp-yamux-dev
  golang-github-inconshreveable-muxado-dev
  golang-github-julienschmidt-httprouter-dev golang-github-mitchellh-cli-dev
  golang-github-mitchellh-mapstructure-dev
  golang-github-opencontainers-runc-dev golang-github-opencontainers-specs-dev
  golang-github-prometheus-common-dev golang-github-ryanuber-columnize-dev
  golang-github-sirupsen-logrus-dev golang-github-stretchr-testify-dev
  golang-github-ugorji-go-codec-dev golang-github-ugorji-go-msgpack-dev
  golang-github-vishvananda-netlink-dev golang-github-vishvananda-netns-dev
  golang-go golang-gocapability-dev golang-golang-x-crypto-dev
  golang-golang-x-net-dev golang-golang-x-sys-dev golang-gopkg-mgo.v2-dev
  golang-gopkg-tomb.v2-dev golang-gopkg-vmihailenco-msgpack.v2-dev
  golang-goprotobuf-dev golang-logrus-dev golang-objx-dev golang-procfs-dev
  golang-prometheus-client-dev golang-protobuf-extensions-dev golang-src
  golang-x-text-dev groff-base intltool-debian libarchive-zip-perl libcroco3
  libffi6 libfile-stripnondeterminism-perl libglib2.0-0 libicu55 liblmdb-dev
  liblmdb0 libmagic1 libpipeline1 libprotobuf9v5 libprotoc9v5 libsasl2-2
  libsasl2-dev libsasl2-modules-db libssl1.0.2 libunistring0 libxml2 man-db
  openssl po-debconf protobuf-compiler
Suggested packages:
  wamerican | wordlist whois vacation dh-make gettext-doc autopoint
  libasprintf-dev libgettextpo-dev bzr git golang-golang-x-tools mercurial
  subversion groff less www-browser libmail-box-perl
Recommended packages:
  curl | wget | lynx-cur pkg-config libglib2.0-data shared-mime-info
  xdg-user-dirs lmdb-doc libsasl2-modules xml-core libmail-sendmail-perl
The following NEW packages will be installed:
  autotools-dev bsdmainutils ca-certificates debhelper dh-golang
  dh-strip-nondeterminism file gettext gettext-base golang-check.v1-dev
  golang-codegangsta-cli-dev golang-context-dev golang-dbus-dev golang-dns-dev
  golang-docker-dev golang-github-armon-circbuf-dev
  golang-github-armon-go-metrics-dev golang-github-armon-go-radix-dev
  golang-github-armon-gomdb-dev golang-github-boltdb-bolt-dev
  golang-github-codegangsta-cli-dev golang-github-coreos-go-systemd-dev
  golang-github-datadog-datadog-go-dev golang-github-docker-docker-dev
  golang-github-elazarl-go-bindata-assetfs-dev
  golang-github-fsouza-go-dockerclient-dev golang-github-gorilla-mux-dev
  golang-github-hashicorp-consul-migrate-dev
  golang-github-hashicorp-errwrap-dev
  golang-github-hashicorp-go-checkpoint-dev
  golang-github-hashicorp-go-cleanhttp-dev
  golang-github-hashicorp-go-immutable-radix-dev
  golang-github-hashicorp-go-memdb-dev golang-github-hashicorp-go-msgpack-dev
  golang-github-hashicorp-go-multierror-dev
  golang-github-hashicorp-go-reap-dev golang-github-hashicorp-go-syslog-dev
  golang-github-hashicorp-golang-lru-dev golang-github-hashicorp-hcl-dev
  golang-github-hashicorp-logutils-dev golang-github-hashicorp-mdns-dev
  golang-github-hashicorp-memberlist-dev
  golang-github-hashicorp-net-rpc-msgpackrpc-dev
  golang-github-hashicorp-raft-boltdb-dev golang-github-hashicorp-raft-dev
  golang-github-hashicorp-raft-mdb-dev
  golang-github-hashicorp-scada-client-dev golang-github-hashicorp-serf-dev
  golang-github-hashicorp-uuid-dev golang-github-hashicorp-yamux-dev
  golang-github-inconshreveable-muxado-dev
  golang-github-julienschmidt-httprouter-dev golang-github-mitchellh-cli-dev
  golang-github-mitchellh-mapstructure-dev
  golang-github-opencontainers-runc-dev golang-github-opencontainers-specs-dev
  golang-github-prometheus-common-dev golang-github-ryanuber-columnize-dev
  golang-github-sirupsen-logrus-dev golang-github-stretchr-testify-dev
  golang-github-ugorji-go-codec-dev golang-github-ugorji-go-msgpack-dev
  golang-github-vishvananda-netlink-dev golang-github-vishvananda-netns-dev
  golang-go golang-gocapability-dev golang-golang-x-crypto-dev
  golang-golang-x-net-dev golang-golang-x-sys-dev golang-gopkg-mgo.v2-dev
  golang-gopkg-tomb.v2-dev golang-gopkg-vmihailenco-msgpack.v2-dev
  golang-goprotobuf-dev golang-logrus-dev golang-objx-dev golang-procfs-dev
  golang-prometheus-client-dev golang-protobuf-extensions-dev golang-src
  golang-x-text-dev groff-base intltool-debian libarchive-zip-perl libcroco3
  libffi6 libfile-stripnondeterminism-perl libglib2.0-0 libicu55 liblmdb-dev
  liblmdb0 libmagic1 libpipeline1 libprotobuf9v5 libprotoc9v5 libsasl2-2
  libsasl2-dev libsasl2-modules-db libssl1.0.2 libunistring0 libxml2 man-db
  openssl po-debconf protobuf-compiler sbuild-build-depends-consul-dummy
0 upgraded, 105 newly installed, 0 to remove and 76 not upgraded.
Need to get 51.7 MB/51.7 MB of archives.
After this operation, 289 MB of additional disk space will be used.
Get:1 file:/<<BUILDDIR>>/resolver-IpBugf/apt_archive ./ sbuild-build-depends-consul-dummy 0.invalid.0 [1062 B]
Get:2 http://172.17.0.1/private stretch-staging/main armhf groff-base armhf 1.22.3-7 [1083 kB]
Get:3 http://172.17.0.1/private stretch-staging/main armhf bsdmainutils armhf 9.0.6 [177 kB]
Get:4 http://172.17.0.1/private stretch-staging/main armhf libpipeline1 armhf 1.4.1-2 [23.7 kB]
Get:5 http://172.17.0.1/private stretch-staging/main armhf man-db armhf 2.7.5-1 [975 kB]
Get:6 http://172.17.0.1/private stretch-staging/main armhf liblmdb0 armhf 0.9.17-3 [36.7 kB]
Get:7 http://172.17.0.1/private stretch-staging/main armhf liblmdb-dev armhf 0.9.17-3 [51.9 kB]
Get:8 http://172.17.0.1/private stretch-staging/main armhf libunistring0 armhf 0.9.3-5.2 [253 kB]
Get:9 http://172.17.0.1/private stretch-staging/main armhf libssl1.0.2 armhf 1.0.2g-1 [886 kB]
Get:10 http://172.17.0.1/private stretch-staging/main armhf libmagic1 armhf 1:5.25-2 [250 kB]
Get:11 http://172.17.0.1/private stretch-staging/main armhf file armhf 1:5.25-2 [61.2 kB]
Get:12 http://172.17.0.1/private stretch-staging/main armhf gettext-base armhf 0.19.7-2 [111 kB]
Get:13 http://172.17.0.1/private stretch-staging/main armhf libsasl2-modules-db armhf 2.1.26.dfsg1-14+b1 [65.8 kB]
Get:14 http://172.17.0.1/private stretch-staging/main armhf libsasl2-2 armhf 2.1.26.dfsg1-14+b1 [97.1 kB]
Get:15 http://172.17.0.1/private stretch-staging/main armhf libicu55 armhf 55.1-7 [7380 kB]
Get:16 http://172.17.0.1/private stretch-staging/main armhf libxml2 armhf 2.9.3+dfsg1-1 [800 kB]
Get:17 http://172.17.0.1/private stretch-staging/main armhf autotools-dev all 20150820.1 [71.7 kB]
Get:18 http://172.17.0.1/private stretch-staging/main armhf openssl armhf 1.0.2g-1 [666 kB]
Get:19 http://172.17.0.1/private stretch-staging/main armhf ca-certificates all 20160104 [200 kB]
Get:20 http://172.17.0.1/private stretch-staging/main armhf libffi6 armhf 3.2.1-4 [18.5 kB]
Get:21 http://172.17.0.1/private stretch-staging/main armhf libglib2.0-0 armhf 2.46.2-3 [2482 kB]
Get:22 http://172.17.0.1/private stretch-staging/main armhf libcroco3 armhf 0.6.11-1 [131 kB]
Get:23 http://172.17.0.1/private stretch-staging/main armhf gettext armhf 0.19.7-2 [1400 kB]
Get:24 http://172.17.0.1/private stretch-staging/main armhf intltool-debian all 0.35.0+20060710.4 [26.3 kB]
Get:25 http://172.17.0.1/private stretch-staging/main armhf po-debconf all 1.0.19 [249 kB]
Get:26 http://172.17.0.1/private stretch-staging/main armhf libarchive-zip-perl all 1.56-2 [94.9 kB]
Get:27 http://172.17.0.1/private stretch-staging/main armhf libfile-stripnondeterminism-perl all 0.016-1 [11.9 kB]
Get:28 http://172.17.0.1/private stretch-staging/main armhf dh-strip-nondeterminism all 0.016-1 [6998 B]
Get:29 http://172.17.0.1/private stretch-staging/main armhf debhelper all 9.20160306 [815 kB]
Get:30 http://172.17.0.1/private stretch-staging/main armhf golang-github-docker-docker-dev all 1.8.3~ds1-2 [223 kB]
Get:31 http://172.17.0.1/private stretch-staging/main armhf golang-docker-dev all 1.8.3~ds1-2 [32.2 kB]
Get:32 http://172.17.0.1/private stretch-staging/main armhf golang-src armhf 2:1.5.3-1+rpi1 [6363 kB]
Get:33 http://172.17.0.1/private stretch-staging/main armhf golang-go armhf 2:1.5.3-1+rpi1 [19.7 MB]
Get:34 http://172.17.0.1/private stretch-staging/main armhf libprotobuf9v5 armhf 2.6.1-1.3 [292 kB]
Get:35 http://172.17.0.1/private stretch-staging/main armhf libprotoc9v5 armhf 2.6.1-1.3 [241 kB]
Get:36 http://172.17.0.1/private stretch-staging/main armhf libsasl2-dev armhf 2.1.26.dfsg1-14+b1 [293 kB]
Get:37 http://172.17.0.1/private stretch-staging/main armhf protobuf-compiler armhf 2.6.1-1.3 [35.8 kB]
Get:38 http://172.17.0.1/private stretch-staging/main armhf dh-golang all 1.12 [9402 B]
Get:39 http://172.17.0.1/private stretch-staging/main armhf golang-check.v1-dev all 0.0+git20150729.11d3bc7-2 [29.0 kB]
Get:40 http://172.17.0.1/private stretch-staging/main armhf golang-github-codegangsta-cli-dev all 0.0~git20150117-3 [14.4 kB]
Get:41 http://172.17.0.1/private stretch-staging/main armhf golang-codegangsta-cli-dev all 0.0~git20150117-3 [2332 B]
Get:42 http://172.17.0.1/private stretch-staging/main armhf golang-context-dev all 0.0~git20140604.1.14f550f-1 [6280 B]
Get:43 http://172.17.0.1/private stretch-staging/main armhf golang-dbus-dev all 3-1 [39.4 kB]
Get:44 http://172.17.0.1/private stretch-staging/main armhf golang-dns-dev all 0.0~git20151030.0.6a15566-1 [128 kB]
Get:45 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-circbuf-dev all 0.0~git20150827.0.bbbad09-1 [3650 B]
Get:46 http://172.17.0.1/private stretch-staging/main armhf golang-github-julienschmidt-httprouter-dev all 1.1-1 [15.5 kB]
Get:47 http://172.17.0.1/private stretch-staging/main armhf golang-x-text-dev all 0+git20151217.cf49866-1 [2096 kB]
Get:48 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-crypto-dev all 1:0.0~git20151201.0.7b85b09-2 [802 kB]
Get:49 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-net-dev all 1:0.0+git20160110.4fd4a9f-1 [514 kB]
Get:50 http://172.17.0.1/private stretch-staging/main armhf golang-goprotobuf-dev armhf 0.0~git20150526-2 [700 kB]
Get:51 http://172.17.0.1/private stretch-staging/main armhf golang-protobuf-extensions-dev all 0+git20150513.fc2b8d3-4 [8694 B]
Get:52 http://172.17.0.1/private stretch-staging/main armhf golang-github-sirupsen-logrus-dev all 0.8.7-2 [25.1 kB]
Get:53 http://172.17.0.1/private stretch-staging/main armhf golang-logrus-dev all 0.8.7-2 [2892 B]
Get:54 http://172.17.0.1/private stretch-staging/main armhf golang-github-prometheus-common-dev all 0+git20160104.0a3005b-1 [49.7 kB]
Get:55 http://172.17.0.1/private stretch-staging/main armhf golang-procfs-dev all 0+git20150616.c91d8ee-1 [13.5 kB]
Get:56 http://172.17.0.1/private stretch-staging/main armhf golang-prometheus-client-dev all 0.7.0+ds-3 [88.4 kB]
Get:57 http://172.17.0.1/private stretch-staging/main armhf golang-github-datadog-datadog-go-dev all 0.0~git20150930.0.b050cd8-1 [7034 B]
Get:58 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-go-metrics-dev all 0.0~git20151207.0.06b6099-1 [13.0 kB]
Get:59 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-go-radix-dev all 0.0~git20150602.0.fbd82e8-1 [6472 B]
Get:60 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-gomdb-dev all 0.0~git20150106.0.151f2e0-1 [7438 B]
Get:61 http://172.17.0.1/private stretch-staging/main armhf golang-github-boltdb-bolt-dev all 1.1.0-1 [55.3 kB]
Get:62 http://172.17.0.1/private stretch-staging/main armhf golang-github-coreos-go-systemd-dev all 4-1 [28.9 kB]
Get:63 http://172.17.0.1/private stretch-staging/main armhf golang-github-elazarl-go-bindata-assetfs-dev all 0.0~git20151224.0.57eb5e1-1 [5088 B]
Get:64 http://172.17.0.1/private stretch-staging/main armhf golang-github-opencontainers-specs-dev all 0.0~git20150829.0.e9cb564-1 [5492 B]
Get:65 http://172.17.0.1/private stretch-staging/main armhf golang-github-vishvananda-netns-dev all 0.0~git20150710.0.604eaf1-1 [5448 B]
Get:66 http://172.17.0.1/private stretch-staging/main armhf golang-github-vishvananda-netlink-dev all 0.0~git20160306.0.4fdf23c-1 [50.5 kB]
Get:67 http://172.17.0.1/private stretch-staging/main armhf golang-gocapability-dev all 0.0~git20150506.1.66ef2aa-1 [10.8 kB]
Get:68 http://172.17.0.1/private stretch-staging/main armhf golang-github-opencontainers-runc-dev all 0.0.8+dfsg-1 [123 kB]
Get:69 http://172.17.0.1/private stretch-staging/main armhf golang-github-gorilla-mux-dev all 0.0~git20150814.0.f7b6aaa-1 [25.0 kB]
Get:70 http://172.17.0.1/private stretch-staging/main armhf golang-objx-dev all 0.0~git20140527-4 [20.1 kB]
Get:71 http://172.17.0.1/private stretch-staging/main armhf golang-github-stretchr-testify-dev all 1.0-2 [27.8 kB]
Get:72 http://172.17.0.1/private stretch-staging/main armhf golang-github-fsouza-go-dockerclient-dev all 0.0+git20150905-1 [150 kB]
Get:73 http://172.17.0.1/private stretch-staging/main armhf golang-github-ugorji-go-msgpack-dev all 0.0~git20130605.792643-1 [20.3 kB]
Get:74 http://172.17.0.1/private stretch-staging/main armhf golang-github-ugorji-go-codec-dev all 0.0~git20151130.0.357a44b-1 [127 kB]
Get:75 http://172.17.0.1/private stretch-staging/main armhf golang-gopkg-vmihailenco-msgpack.v2-dev all 2.4.7-1 [17.0 kB]
Get:76 http://172.17.0.1/private stretch-staging/main armhf golang-gopkg-tomb.v2-dev all 0.0~git20140626.14b3d72-1 [5140 B]
Get:77 http://172.17.0.1/private stretch-staging/main armhf golang-gopkg-mgo.v2-dev all 2015.12.06-1 [138 kB]
Get:78 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-msgpack-dev all 0.0~git20150518-1 [42.1 kB]
Get:79 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-raft-dev all 0.0~git20150728.9b586e2-2 [49.3 kB]
Get:80 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-raft-boltdb-dev all 0.0~git20150201.d1e82c1-1 [9744 B]
Get:81 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-raft-mdb-dev all 0.0~git20150806.0.55f2947-1 [10.2 kB]
Get:82 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-consul-migrate-dev all 0.1.0-1 [14.0 kB]
Get:83 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-errwrap-dev all 0.0~git20141028.0.7554cd9-1 [9692 B]
Get:84 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-cleanhttp-dev all 0.0~git20151022.0.5df5ddc-1 [7408 B]
Get:85 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-checkpoint-dev all 0.0~git20151022.0.e4b2dc3-1 [11.2 kB]
Get:86 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-golang-lru-dev all 0.0~git20160207.0.a0d98a5-1 [12.9 kB]
Get:87 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-uuid-dev all 0.0~git20160218.0.6994546-1 [7306 B]
Get:88 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-immutable-radix-dev all 0.0~git20160222.0.8e8ed81-1 [13.4 kB]
Get:89 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-memdb-dev all 0.0~git20160301.0.98f52f5-1 [19.1 kB]
Get:90 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-multierror-dev all 0.0~git20150916.0.d30f099-1 [9274 B]
Get:91 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-sys-dev all 0.0~git20150612-1 [171 kB]
Get:92 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-reap-dev all 0.0~git20160113.0.2d85522-1 [9084 B]
Get:93 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-syslog-dev all 0.0~git20150218.0.42a2b57-1 [5336 B]
Get:94 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-hcl-dev all 0.0~git20151110.0.fa160f1-1 [42.7 kB]
Get:95 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-logutils-dev all 0.0~git20150609.0.0dc08b1-1 [8150 B]
Get:96 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-mdns-dev all 0.0~git20150317.0.2b439d3-1 [10.9 kB]
Get:97 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-memberlist-dev all 0.0~git20160225.0.ae9a8d9-1 [48.7 kB]
Get:98 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-net-rpc-msgpackrpc-dev all 0.0~git20151015.0.d31f7b9-1 [4004 B]
Get:99 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-yamux-dev all 0.0~git20151129.0.df94978-1 [20.0 kB]
Get:100 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-scada-client-dev all 0.0~git20150828.0.84989fd-1 [17.4 kB]
Get:101 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-cli-dev all 0.0~git20150618.0.8102d0e-1 [13.7 kB]
Get:102 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-mapstructure-dev all 0.0~git20150717.0.281073e-2 [14.4 kB]
Get:103 http://172.17.0.1/private stretch-staging/main armhf golang-github-ryanuber-columnize-dev all 2.0.1-1 [4476 B]
Get:104 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-serf-dev all 0.7.0~ds1-1 [110 kB]
Get:105 http://172.17.0.1/private stretch-staging/main armhf golang-github-inconshreveable-muxado-dev all 0.0~git20140312.0.f693c7e-1 [26.4 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 51.7 MB in 7s (6676 kB/s)
Selecting previously unselected package groff-base.
(Reading database ... 13601 files and directories currently installed.)
Preparing to unpack .../groff-base_1.22.3-7_armhf.deb ...
Unpacking groff-base (1.22.3-7) ...
Selecting previously unselected package bsdmainutils.
Preparing to unpack .../bsdmainutils_9.0.6_armhf.deb ...
Unpacking bsdmainutils (9.0.6) ...
Selecting previously unselected package libpipeline1:armhf.
Preparing to unpack .../libpipeline1_1.4.1-2_armhf.deb ...
Unpacking libpipeline1:armhf (1.4.1-2) ...
Selecting previously unselected package man-db.
Preparing to unpack .../man-db_2.7.5-1_armhf.deb ...
Unpacking man-db (2.7.5-1) ...
Selecting previously unselected package liblmdb0:armhf.
Preparing to unpack .../liblmdb0_0.9.17-3_armhf.deb ...
Unpacking liblmdb0:armhf (0.9.17-3) ...
Selecting previously unselected package liblmdb-dev:armhf.
Preparing to unpack .../liblmdb-dev_0.9.17-3_armhf.deb ...
Unpacking liblmdb-dev:armhf (0.9.17-3) ...
Selecting previously unselected package libunistring0:armhf.
Preparing to unpack .../libunistring0_0.9.3-5.2_armhf.deb ...
Unpacking libunistring0:armhf (0.9.3-5.2) ...
Selecting previously unselected package libssl1.0.2:armhf.
Preparing to unpack .../libssl1.0.2_1.0.2g-1_armhf.deb ...
Unpacking libssl1.0.2:armhf (1.0.2g-1) ...
Selecting previously unselected package libmagic1:armhf.
Preparing to unpack .../libmagic1_1%3a5.25-2_armhf.deb ...
Unpacking libmagic1:armhf (1:5.25-2) ...
Selecting previously unselected package file.
Preparing to unpack .../file_1%3a5.25-2_armhf.deb ...
Unpacking file (1:5.25-2) ...
Selecting previously unselected package gettext-base.
Preparing to unpack .../gettext-base_0.19.7-2_armhf.deb ...
Unpacking gettext-base (0.19.7-2) ...
Selecting previously unselected package libsasl2-modules-db:armhf.
Preparing to unpack .../libsasl2-modules-db_2.1.26.dfsg1-14+b1_armhf.deb ...
Unpacking libsasl2-modules-db:armhf (2.1.26.dfsg1-14+b1) ...
Selecting previously unselected package libsasl2-2:armhf.
Preparing to unpack .../libsasl2-2_2.1.26.dfsg1-14+b1_armhf.deb ...
Unpacking libsasl2-2:armhf (2.1.26.dfsg1-14+b1) ...
Selecting previously unselected package libicu55:armhf.
Preparing to unpack .../libicu55_55.1-7_armhf.deb ...
Unpacking libicu55:armhf (55.1-7) ...
Selecting previously unselected package libxml2:armhf.
Preparing to unpack .../libxml2_2.9.3+dfsg1-1_armhf.deb ...
Unpacking libxml2:armhf (2.9.3+dfsg1-1) ...
Selecting previously unselected package autotools-dev.
Preparing to unpack .../autotools-dev_20150820.1_all.deb ...
Unpacking autotools-dev (20150820.1) ...
Selecting previously unselected package openssl.
Preparing to unpack .../openssl_1.0.2g-1_armhf.deb ...
Unpacking openssl (1.0.2g-1) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../ca-certificates_20160104_all.deb ...
Unpacking ca-certificates (20160104) ...
Selecting previously unselected package libffi6:armhf.
Preparing to unpack .../libffi6_3.2.1-4_armhf.deb ...
Unpacking libffi6:armhf (3.2.1-4) ...
Selecting previously unselected package libglib2.0-0:armhf.
Preparing to unpack .../libglib2.0-0_2.46.2-3_armhf.deb ...
Unpacking libglib2.0-0:armhf (2.46.2-3) ...
Selecting previously unselected package libcroco3:armhf.
Preparing to unpack .../libcroco3_0.6.11-1_armhf.deb ...
Unpacking libcroco3:armhf (0.6.11-1) ...
Selecting previously unselected package gettext.
Preparing to unpack .../gettext_0.19.7-2_armhf.deb ...
Unpacking gettext (0.19.7-2) ...
Selecting previously unselected package intltool-debian.
Preparing to unpack .../intltool-debian_0.35.0+20060710.4_all.deb ...
Unpacking intltool-debian (0.35.0+20060710.4) ...
Selecting previously unselected package po-debconf.
Preparing to unpack .../po-debconf_1.0.19_all.deb ...
Unpacking po-debconf (1.0.19) ...
Selecting previously unselected package libarchive-zip-perl.
Preparing to unpack .../libarchive-zip-perl_1.56-2_all.deb ...
Unpacking libarchive-zip-perl (1.56-2) ...
Selecting previously unselected package libfile-stripnondeterminism-perl.
Preparing to unpack .../libfile-stripnondeterminism-perl_0.016-1_all.deb ...
Unpacking libfile-stripnondeterminism-perl (0.016-1) ...
Selecting previously unselected package dh-strip-nondeterminism.
Preparing to unpack .../dh-strip-nondeterminism_0.016-1_all.deb ...
Unpacking dh-strip-nondeterminism (0.016-1) ...
Selecting previously unselected package debhelper.
Preparing to unpack .../debhelper_9.20160306_all.deb ...
Unpacking debhelper (9.20160306) ...
Selecting previously unselected package golang-github-docker-docker-dev.
Preparing to unpack .../golang-github-docker-docker-dev_1.8.3~ds1-2_all.deb ...
Unpacking golang-github-docker-docker-dev (1.8.3~ds1-2) ...
Selecting previously unselected package golang-docker-dev.
Preparing to unpack .../golang-docker-dev_1.8.3~ds1-2_all.deb ...
Unpacking golang-docker-dev (1.8.3~ds1-2) ...
Selecting previously unselected package golang-src.
Preparing to unpack .../golang-src_2%3a1.5.3-1+rpi1_armhf.deb ...
Unpacking golang-src (2:1.5.3-1+rpi1) ...
Selecting previously unselected package golang-go.
Preparing to unpack .../golang-go_2%3a1.5.3-1+rpi1_armhf.deb ...
Unpacking golang-go (2:1.5.3-1+rpi1) ...
Selecting previously unselected package libprotobuf9v5:armhf.
Preparing to unpack .../libprotobuf9v5_2.6.1-1.3_armhf.deb ...
Unpacking libprotobuf9v5:armhf (2.6.1-1.3) ...
Selecting previously unselected package libprotoc9v5:armhf.
Preparing to unpack .../libprotoc9v5_2.6.1-1.3_armhf.deb ...
Unpacking libprotoc9v5:armhf (2.6.1-1.3) ...
Selecting previously unselected package libsasl2-dev.
Preparing to unpack .../libsasl2-dev_2.1.26.dfsg1-14+b1_armhf.deb ...
Unpacking libsasl2-dev (2.1.26.dfsg1-14+b1) ...
Selecting previously unselected package protobuf-compiler.
Preparing to unpack .../protobuf-compiler_2.6.1-1.3_armhf.deb ...
Unpacking protobuf-compiler (2.6.1-1.3) ...
Selecting previously unselected package dh-golang.
Preparing to unpack .../dh-golang_1.12_all.deb ...
Unpacking dh-golang (1.12) ...
Selecting previously unselected package golang-check.v1-dev.
Preparing to unpack .../golang-check.v1-dev_0.0+git20150729.11d3bc7-2_all.deb ...
Unpacking golang-check.v1-dev (0.0+git20150729.11d3bc7-2) ...
Selecting previously unselected package golang-github-codegangsta-cli-dev.
Preparing to unpack .../golang-github-codegangsta-cli-dev_0.0~git20150117-3_all.deb ...
Unpacking golang-github-codegangsta-cli-dev (0.0~git20150117-3) ...
Selecting previously unselected package golang-codegangsta-cli-dev.
Preparing to unpack .../golang-codegangsta-cli-dev_0.0~git20150117-3_all.deb ...
Unpacking golang-codegangsta-cli-dev (0.0~git20150117-3) ...
Selecting previously unselected package golang-context-dev.
Preparing to unpack .../golang-context-dev_0.0~git20140604.1.14f550f-1_all.deb ...
Unpacking golang-context-dev (0.0~git20140604.1.14f550f-1) ...
Selecting previously unselected package golang-dbus-dev.
Preparing to unpack .../golang-dbus-dev_3-1_all.deb ...
Unpacking golang-dbus-dev (3-1) ...
Selecting previously unselected package golang-dns-dev.
Preparing to unpack .../golang-dns-dev_0.0~git20151030.0.6a15566-1_all.deb ...
Unpacking golang-dns-dev (0.0~git20151030.0.6a15566-1) ...
Selecting previously unselected package golang-github-armon-circbuf-dev.
Preparing to unpack .../golang-github-armon-circbuf-dev_0.0~git20150827.0.bbbad09-1_all.deb ...
Unpacking golang-github-armon-circbuf-dev (0.0~git20150827.0.bbbad09-1) ...
Selecting previously unselected package golang-github-julienschmidt-httprouter-dev.
Preparing to unpack .../golang-github-julienschmidt-httprouter-dev_1.1-1_all.deb ...
Unpacking golang-github-julienschmidt-httprouter-dev (1.1-1) ...
Selecting previously unselected package golang-x-text-dev.
Preparing to unpack .../golang-x-text-dev_0+git20151217.cf49866-1_all.deb ...
Unpacking golang-x-text-dev (0+git20151217.cf49866-1) ...
Selecting previously unselected package golang-golang-x-crypto-dev.
Preparing to unpack .../golang-golang-x-crypto-dev_1%3a0.0~git20151201.0.7b85b09-2_all.deb ...
Unpacking golang-golang-x-crypto-dev (1:0.0~git20151201.0.7b85b09-2) ...
Selecting previously unselected package golang-golang-x-net-dev.
Preparing to unpack .../golang-golang-x-net-dev_1%3a0.0+git20160110.4fd4a9f-1_all.deb ...
Unpacking golang-golang-x-net-dev (1:0.0+git20160110.4fd4a9f-1) ...
Selecting previously unselected package golang-goprotobuf-dev.
Preparing to unpack .../golang-goprotobuf-dev_0.0~git20150526-2_armhf.deb ...
Unpacking golang-goprotobuf-dev (0.0~git20150526-2) ...
Selecting previously unselected package golang-protobuf-extensions-dev.
Preparing to unpack .../golang-protobuf-extensions-dev_0+git20150513.fc2b8d3-4_all.deb ...
Unpacking golang-protobuf-extensions-dev (0+git20150513.fc2b8d3-4) ...
Selecting previously unselected package golang-github-sirupsen-logrus-dev.
Preparing to unpack .../golang-github-sirupsen-logrus-dev_0.8.7-2_all.deb ...
Unpacking golang-github-sirupsen-logrus-dev (0.8.7-2) ...
Selecting previously unselected package golang-logrus-dev.
Preparing to unpack .../golang-logrus-dev_0.8.7-2_all.deb ...
Unpacking golang-logrus-dev (0.8.7-2) ...
Selecting previously unselected package golang-github-prometheus-common-dev.
Preparing to unpack .../golang-github-prometheus-common-dev_0+git20160104.0a3005b-1_all.deb ...
Unpacking golang-github-prometheus-common-dev (0+git20160104.0a3005b-1) ...
Selecting previously unselected package golang-procfs-dev.
Preparing to unpack .../golang-procfs-dev_0+git20150616.c91d8ee-1_all.deb ...
Unpacking golang-procfs-dev (0+git20150616.c91d8ee-1) ...
Selecting previously unselected package golang-prometheus-client-dev.
Preparing to unpack .../golang-prometheus-client-dev_0.7.0+ds-3_all.deb ...
Unpacking golang-prometheus-client-dev (0.7.0+ds-3) ...
Selecting previously unselected package golang-github-datadog-datadog-go-dev.
Preparing to unpack .../golang-github-datadog-datadog-go-dev_0.0~git20150930.0.b050cd8-1_all.deb ...
Unpacking golang-github-datadog-datadog-go-dev (0.0~git20150930.0.b050cd8-1) ...
Selecting previously unselected package golang-github-armon-go-metrics-dev.
Preparing to unpack .../golang-github-armon-go-metrics-dev_0.0~git20151207.0.06b6099-1_all.deb ...
Unpacking golang-github-armon-go-metrics-dev (0.0~git20151207.0.06b6099-1) ...
Selecting previously unselected package golang-github-armon-go-radix-dev.
Preparing to unpack .../golang-github-armon-go-radix-dev_0.0~git20150602.0.fbd82e8-1_all.deb ...
Unpacking golang-github-armon-go-radix-dev (0.0~git20150602.0.fbd82e8-1) ...
Selecting previously unselected package golang-github-armon-gomdb-dev.
Preparing to unpack .../golang-github-armon-gomdb-dev_0.0~git20150106.0.151f2e0-1_all.deb ...
Unpacking golang-github-armon-gomdb-dev (0.0~git20150106.0.151f2e0-1) ...
Selecting previously unselected package golang-github-boltdb-bolt-dev.
Preparing to unpack .../golang-github-boltdb-bolt-dev_1.1.0-1_all.deb ...
Unpacking golang-github-boltdb-bolt-dev (1.1.0-1) ...
Selecting previously unselected package golang-github-coreos-go-systemd-dev.
Preparing to unpack .../golang-github-coreos-go-systemd-dev_4-1_all.deb ...
Unpacking golang-github-coreos-go-systemd-dev (4-1) ...
Selecting previously unselected package golang-github-elazarl-go-bindata-assetfs-dev.
Preparing to unpack .../golang-github-elazarl-go-bindata-assetfs-dev_0.0~git20151224.0.57eb5e1-1_all.deb ...
Unpacking golang-github-elazarl-go-bindata-assetfs-dev (0.0~git20151224.0.57eb5e1-1) ...
Selecting previously unselected package golang-github-opencontainers-specs-dev.
Preparing to unpack .../golang-github-opencontainers-specs-dev_0.0~git20150829.0.e9cb564-1_all.deb ...
Unpacking golang-github-opencontainers-specs-dev (0.0~git20150829.0.e9cb564-1) ...
Selecting previously unselected package golang-github-vishvananda-netns-dev.
Preparing to unpack .../golang-github-vishvananda-netns-dev_0.0~git20150710.0.604eaf1-1_all.deb ...
Unpacking golang-github-vishvananda-netns-dev (0.0~git20150710.0.604eaf1-1) ...
Selecting previously unselected package golang-github-vishvananda-netlink-dev.
Preparing to unpack .../golang-github-vishvananda-netlink-dev_0.0~git20160306.0.4fdf23c-1_all.deb ...
Unpacking golang-github-vishvananda-netlink-dev (0.0~git20160306.0.4fdf23c-1) ...
Selecting previously unselected package golang-gocapability-dev.
Preparing to unpack .../golang-gocapability-dev_0.0~git20150506.1.66ef2aa-1_all.deb ...
Unpacking golang-gocapability-dev (0.0~git20150506.1.66ef2aa-1) ...
Selecting previously unselected package golang-github-opencontainers-runc-dev.
Preparing to unpack .../golang-github-opencontainers-runc-dev_0.0.8+dfsg-1_all.deb ...
Unpacking golang-github-opencontainers-runc-dev (0.0.8+dfsg-1) ...
Selecting previously unselected package golang-github-gorilla-mux-dev.
Preparing to unpack .../golang-github-gorilla-mux-dev_0.0~git20150814.0.f7b6aaa-1_all.deb ...
Unpacking golang-github-gorilla-mux-dev (0.0~git20150814.0.f7b6aaa-1) ...
Selecting previously unselected package golang-objx-dev.
Preparing to unpack .../golang-objx-dev_0.0~git20140527-4_all.deb ...
Unpacking golang-objx-dev (0.0~git20140527-4) ...
Selecting previously unselected package golang-github-stretchr-testify-dev.
Preparing to unpack .../golang-github-stretchr-testify-dev_1.0-2_all.deb ...
Unpacking golang-github-stretchr-testify-dev (1.0-2) ...
Selecting previously unselected package golang-github-fsouza-go-dockerclient-dev.
Preparing to unpack .../golang-github-fsouza-go-dockerclient-dev_0.0+git20150905-1_all.deb ...
Unpacking golang-github-fsouza-go-dockerclient-dev (0.0+git20150905-1) ...
Selecting previously unselected package golang-github-ugorji-go-msgpack-dev.
Preparing to unpack .../golang-github-ugorji-go-msgpack-dev_0.0~git20130605.792643-1_all.deb ...
Unpacking golang-github-ugorji-go-msgpack-dev (0.0~git20130605.792643-1) ...
Selecting previously unselected package golang-github-ugorji-go-codec-dev.
Preparing to unpack .../golang-github-ugorji-go-codec-dev_0.0~git20151130.0.357a44b-1_all.deb ...
Unpacking golang-github-ugorji-go-codec-dev (0.0~git20151130.0.357a44b-1) ...
Selecting previously unselected package golang-gopkg-vmihailenco-msgpack.v2-dev.
Preparing to unpack .../golang-gopkg-vmihailenco-msgpack.v2-dev_2.4.7-1_all.deb ...
Unpacking golang-gopkg-vmihailenco-msgpack.v2-dev (2.4.7-1) ...
Selecting previously unselected package golang-gopkg-tomb.v2-dev.
Preparing to unpack .../golang-gopkg-tomb.v2-dev_0.0~git20140626.14b3d72-1_all.deb ...
Unpacking golang-gopkg-tomb.v2-dev (0.0~git20140626.14b3d72-1) ...
Selecting previously unselected package golang-gopkg-mgo.v2-dev.
Preparing to unpack .../golang-gopkg-mgo.v2-dev_2015.12.06-1_all.deb ...
Unpacking golang-gopkg-mgo.v2-dev (2015.12.06-1) ...
Selecting previously unselected package golang-github-hashicorp-go-msgpack-dev.
Preparing to unpack .../golang-github-hashicorp-go-msgpack-dev_0.0~git20150518-1_all.deb ...
Unpacking golang-github-hashicorp-go-msgpack-dev (0.0~git20150518-1) ...
Selecting previously unselected package golang-github-hashicorp-raft-dev.
Preparing to unpack .../golang-github-hashicorp-raft-dev_0.0~git20150728.9b586e2-2_all.deb ...
Unpacking golang-github-hashicorp-raft-dev (0.0~git20150728.9b586e2-2) ...
Selecting previously unselected package golang-github-hashicorp-raft-boltdb-dev.
Preparing to unpack .../golang-github-hashicorp-raft-boltdb-dev_0.0~git20150201.d1e82c1-1_all.deb ...
Unpacking golang-github-hashicorp-raft-boltdb-dev (0.0~git20150201.d1e82c1-1) ...
Selecting previously unselected package golang-github-hashicorp-raft-mdb-dev.
Preparing to unpack .../golang-github-hashicorp-raft-mdb-dev_0.0~git20150806.0.55f2947-1_all.deb ...
Unpacking golang-github-hashicorp-raft-mdb-dev (0.0~git20150806.0.55f2947-1) ...
Selecting previously unselected package golang-github-hashicorp-consul-migrate-dev.
Preparing to unpack .../golang-github-hashicorp-consul-migrate-dev_0.1.0-1_all.deb ...
Unpacking golang-github-hashicorp-consul-migrate-dev (0.1.0-1) ...
Selecting previously unselected package golang-github-hashicorp-errwrap-dev.
Preparing to unpack .../golang-github-hashicorp-errwrap-dev_0.0~git20141028.0.7554cd9-1_all.deb ...
Unpacking golang-github-hashicorp-errwrap-dev (0.0~git20141028.0.7554cd9-1) ...
Selecting previously unselected package golang-github-hashicorp-go-cleanhttp-dev.
Preparing to unpack .../golang-github-hashicorp-go-cleanhttp-dev_0.0~git20151022.0.5df5ddc-1_all.deb ...
Unpacking golang-github-hashicorp-go-cleanhttp-dev (0.0~git20151022.0.5df5ddc-1) ...
Selecting previously unselected package golang-github-hashicorp-go-checkpoint-dev.
Preparing to unpack .../golang-github-hashicorp-go-checkpoint-dev_0.0~git20151022.0.e4b2dc3-1_all.deb ...
Unpacking golang-github-hashicorp-go-checkpoint-dev (0.0~git20151022.0.e4b2dc3-1) ...
Selecting previously unselected package golang-github-hashicorp-golang-lru-dev.
Preparing to unpack .../golang-github-hashicorp-golang-lru-dev_0.0~git20160207.0.a0d98a5-1_all.deb ...
Unpacking golang-github-hashicorp-golang-lru-dev (0.0~git20160207.0.a0d98a5-1) ...
Selecting previously unselected package golang-github-hashicorp-uuid-dev.
Preparing to unpack .../golang-github-hashicorp-uuid-dev_0.0~git20160218.0.6994546-1_all.deb ...
Unpacking golang-github-hashicorp-uuid-dev (0.0~git20160218.0.6994546-1) ...
Selecting previously unselected package golang-github-hashicorp-go-immutable-radix-dev.
Preparing to unpack .../golang-github-hashicorp-go-immutable-radix-dev_0.0~git20160222.0.8e8ed81-1_all.deb ...
Unpacking golang-github-hashicorp-go-immutable-radix-dev (0.0~git20160222.0.8e8ed81-1) ...
Selecting previously unselected package golang-github-hashicorp-go-memdb-dev.
Preparing to unpack .../golang-github-hashicorp-go-memdb-dev_0.0~git20160301.0.98f52f5-1_all.deb ...
Unpacking golang-github-hashicorp-go-memdb-dev (0.0~git20160301.0.98f52f5-1) ...
Selecting previously unselected package golang-github-hashicorp-go-multierror-dev.
Preparing to unpack .../golang-github-hashicorp-go-multierror-dev_0.0~git20150916.0.d30f099-1_all.deb ...
Unpacking golang-github-hashicorp-go-multierror-dev (0.0~git20150916.0.d30f099-1) ...
Selecting previously unselected package golang-golang-x-sys-dev.
Preparing to unpack .../golang-golang-x-sys-dev_0.0~git20150612-1_all.deb ...
Unpacking golang-golang-x-sys-dev (0.0~git20150612-1) ...
Selecting previously unselected package golang-github-hashicorp-go-reap-dev.
Preparing to unpack .../golang-github-hashicorp-go-reap-dev_0.0~git20160113.0.2d85522-1_all.deb ...
Unpacking golang-github-hashicorp-go-reap-dev (0.0~git20160113.0.2d85522-1) ...
Selecting previously unselected package golang-github-hashicorp-go-syslog-dev.
Preparing to unpack .../golang-github-hashicorp-go-syslog-dev_0.0~git20150218.0.42a2b57-1_all.deb ...
Unpacking golang-github-hashicorp-go-syslog-dev (0.0~git20150218.0.42a2b57-1) ...
Selecting previously unselected package golang-github-hashicorp-hcl-dev.
Preparing to unpack .../golang-github-hashicorp-hcl-dev_0.0~git20151110.0.fa160f1-1_all.deb ...
Unpacking golang-github-hashicorp-hcl-dev (0.0~git20151110.0.fa160f1-1) ...
Selecting previously unselected package golang-github-hashicorp-logutils-dev.
Preparing to unpack .../golang-github-hashicorp-logutils-dev_0.0~git20150609.0.0dc08b1-1_all.deb ...
Unpacking golang-github-hashicorp-logutils-dev (0.0~git20150609.0.0dc08b1-1) ...
Selecting previously unselected package golang-github-hashicorp-mdns-dev.
Preparing to unpack .../golang-github-hashicorp-mdns-dev_0.0~git20150317.0.2b439d3-1_all.deb ...
Unpacking golang-github-hashicorp-mdns-dev (0.0~git20150317.0.2b439d3-1) ...
Selecting previously unselected package golang-github-hashicorp-memberlist-dev.
Preparing to unpack .../golang-github-hashicorp-memberlist-dev_0.0~git20160225.0.ae9a8d9-1_all.deb ...
Unpacking golang-github-hashicorp-memberlist-dev (0.0~git20160225.0.ae9a8d9-1) ...
Selecting previously unselected package golang-github-hashicorp-net-rpc-msgpackrpc-dev.
Preparing to unpack .../golang-github-hashicorp-net-rpc-msgpackrpc-dev_0.0~git20151015.0.d31f7b9-1_all.deb ...
Unpacking golang-github-hashicorp-net-rpc-msgpackrpc-dev (0.0~git20151015.0.d31f7b9-1) ...
Selecting previously unselected package golang-github-hashicorp-yamux-dev.
Preparing to unpack .../golang-github-hashicorp-yamux-dev_0.0~git20151129.0.df94978-1_all.deb ...
Unpacking golang-github-hashicorp-yamux-dev (0.0~git20151129.0.df94978-1) ...
Selecting previously unselected package golang-github-hashicorp-scada-client-dev.
Preparing to unpack .../golang-github-hashicorp-scada-client-dev_0.0~git20150828.0.84989fd-1_all.deb ...
Unpacking golang-github-hashicorp-scada-client-dev (0.0~git20150828.0.84989fd-1) ...
Selecting previously unselected package golang-github-mitchellh-cli-dev.
Preparing to unpack .../golang-github-mitchellh-cli-dev_0.0~git20150618.0.8102d0e-1_all.deb ...
Unpacking golang-github-mitchellh-cli-dev (0.0~git20150618.0.8102d0e-1) ...
Selecting previously unselected package golang-github-mitchellh-mapstructure-dev.
Preparing to unpack .../golang-github-mitchellh-mapstructure-dev_0.0~git20150717.0.281073e-2_all.deb ...
Unpacking golang-github-mitchellh-mapstructure-dev (0.0~git20150717.0.281073e-2) ...
Selecting previously unselected package golang-github-ryanuber-columnize-dev.
Preparing to unpack .../golang-github-ryanuber-columnize-dev_2.0.1-1_all.deb ...
Unpacking golang-github-ryanuber-columnize-dev (2.0.1-1) ...
Selecting previously unselected package golang-github-hashicorp-serf-dev.
Preparing to unpack .../golang-github-hashicorp-serf-dev_0.7.0~ds1-1_all.deb ...
Unpacking golang-github-hashicorp-serf-dev (0.7.0~ds1-1) ...
Selecting previously unselected package golang-github-inconshreveable-muxado-dev.
Preparing to unpack .../golang-github-inconshreveable-muxado-dev_0.0~git20140312.0.f693c7e-1_all.deb ...
Unpacking golang-github-inconshreveable-muxado-dev (0.0~git20140312.0.f693c7e-1) ...
Selecting previously unselected package sbuild-build-depends-consul-dummy.
Preparing to unpack .../sbuild-build-depends-consul-dummy.deb ...
Unpacking sbuild-build-depends-consul-dummy (0.invalid.0) ...
Processing triggers for libc-bin (2.21-7) ...
Setting up groff-base (1.22.3-7) ...
Setting up bsdmainutils (9.0.6) ...
update-alternatives: using /usr/bin/bsd-write to provide /usr/bin/write (write) in auto mode
update-alternatives: using /usr/bin/bsd-from to provide /usr/bin/from (from) in auto mode
Setting up libpipeline1:armhf (1.4.1-2) ...
Setting up man-db (2.7.5-1) ...
Not building database; man-db/auto-update is not 'true'.
Setting up liblmdb0:armhf (0.9.17-3) ...
Setting up liblmdb-dev:armhf (0.9.17-3) ...
Setting up libunistring0:armhf (0.9.3-5.2) ...
Setting up libssl1.0.2:armhf (1.0.2g-1) ...
Setting up libmagic1:armhf (1:5.25-2) ...
Setting up file (1:5.25-2) ...
Setting up gettext-base (0.19.7-2) ...
Setting up libsasl2-modules-db:armhf (2.1.26.dfsg1-14+b1) ...
Setting up libsasl2-2:armhf (2.1.26.dfsg1-14+b1) ...
Setting up libicu55:armhf (55.1-7) ...
Setting up libxml2:armhf (2.9.3+dfsg1-1) ...
Setting up autotools-dev (20150820.1) ...
Setting up openssl (1.0.2g-1) ...
Setting up ca-certificates (20160104) ...
Setting up libffi6:armhf (3.2.1-4) ...
Setting up libglib2.0-0:armhf (2.46.2-3) ...
No schema files found: doing nothing.
Setting up libcroco3:armhf (0.6.11-1) ...
Setting up gettext (0.19.7-2) ...
Setting up intltool-debian (0.35.0+20060710.4) ...
Setting up po-debconf (1.0.19) ...
Setting up libarchive-zip-perl (1.56-2) ...
Setting up libfile-stripnondeterminism-perl (0.016-1) ...
Setting up golang-github-docker-docker-dev (1.8.3~ds1-2) ...
Setting up golang-docker-dev (1.8.3~ds1-2) ...
Setting up golang-src (2:1.5.3-1+rpi1) ...
Setting up golang-go (2:1.5.3-1+rpi1) ...
update-alternatives: using /usr/lib/go/bin/go to provide /usr/bin/go (go) in auto mode
Setting up libprotobuf9v5:armhf (2.6.1-1.3) ...
Setting up libprotoc9v5:armhf (2.6.1-1.3) ...
Setting up libsasl2-dev (2.1.26.dfsg1-14+b1) ...
Setting up protobuf-compiler (2.6.1-1.3) ...
Setting up golang-check.v1-dev (0.0+git20150729.11d3bc7-2) ...
Setting up golang-github-codegangsta-cli-dev (0.0~git20150117-3) ...
Setting up golang-codegangsta-cli-dev (0.0~git20150117-3) ...
Setting up golang-context-dev (0.0~git20140604.1.14f550f-1) ...
Setting up golang-dbus-dev (3-1) ...
Setting up golang-dns-dev (0.0~git20151030.0.6a15566-1) ...
Setting up golang-github-armon-circbuf-dev (0.0~git20150827.0.bbbad09-1) ...
Setting up golang-github-julienschmidt-httprouter-dev (1.1-1) ...
Setting up golang-x-text-dev (0+git20151217.cf49866-1) ...
Setting up golang-golang-x-crypto-dev (1:0.0~git20151201.0.7b85b09-2) ...
Setting up golang-golang-x-net-dev (1:0.0+git20160110.4fd4a9f-1) ...
Setting up golang-goprotobuf-dev (0.0~git20150526-2) ...
Setting up golang-protobuf-extensions-dev (0+git20150513.fc2b8d3-4) ...
Setting up golang-github-sirupsen-logrus-dev (0.8.7-2) ...
Setting up golang-logrus-dev (0.8.7-2) ...
Setting up golang-github-prometheus-common-dev (0+git20160104.0a3005b-1) ...
Setting up golang-procfs-dev (0+git20150616.c91d8ee-1) ...
Setting up golang-prometheus-client-dev (0.7.0+ds-3) ...
Setting up golang-github-datadog-datadog-go-dev (0.0~git20150930.0.b050cd8-1) ...
Setting up golang-github-armon-go-metrics-dev (0.0~git20151207.0.06b6099-1) ...
Setting up golang-github-armon-go-radix-dev (0.0~git20150602.0.fbd82e8-1) ...
Setting up golang-github-armon-gomdb-dev (0.0~git20150106.0.151f2e0-1) ...
Setting up golang-github-boltdb-bolt-dev (1.1.0-1) ...
Setting up golang-github-coreos-go-systemd-dev (4-1) ...
Setting up golang-github-elazarl-go-bindata-assetfs-dev (0.0~git20151224.0.57eb5e1-1) ...
Setting up golang-github-opencontainers-specs-dev (0.0~git20150829.0.e9cb564-1) ...
Setting up golang-github-vishvananda-netns-dev (0.0~git20150710.0.604eaf1-1) ...
Setting up golang-github-vishvananda-netlink-dev (0.0~git20160306.0.4fdf23c-1) ...
Setting up golang-gocapability-dev (0.0~git20150506.1.66ef2aa-1) ...
Setting up golang-github-opencontainers-runc-dev (0.0.8+dfsg-1) ...
Setting up golang-github-gorilla-mux-dev (0.0~git20150814.0.f7b6aaa-1) ...
Setting up golang-objx-dev (0.0~git20140527-4) ...
Setting up golang-github-stretchr-testify-dev (1.0-2) ...
Setting up golang-github-fsouza-go-dockerclient-dev (0.0+git20150905-1) ...
Setting up golang-github-ugorji-go-msgpack-dev (0.0~git20130605.792643-1) ...
Setting up golang-github-ugorji-go-codec-dev (0.0~git20151130.0.357a44b-1) ...
Setting up golang-gopkg-vmihailenco-msgpack.v2-dev (2.4.7-1) ...
Setting up golang-gopkg-tomb.v2-dev (0.0~git20140626.14b3d72-1) ...
Setting up golang-gopkg-mgo.v2-dev (2015.12.06-1) ...
Setting up golang-github-hashicorp-go-msgpack-dev (0.0~git20150518-1) ...
Setting up golang-github-hashicorp-raft-dev (0.0~git20150728.9b586e2-2) ...
Setting up golang-github-hashicorp-raft-boltdb-dev (0.0~git20150201.d1e82c1-1) ...
Setting up golang-github-hashicorp-raft-mdb-dev (0.0~git20150806.0.55f2947-1) ...
Setting up golang-github-hashicorp-consul-migrate-dev (0.1.0-1) ...
Setting up golang-github-hashicorp-errwrap-dev (0.0~git20141028.0.7554cd9-1) ...
Setting up golang-github-hashicorp-go-cleanhttp-dev (0.0~git20151022.0.5df5ddc-1) ...
Setting up golang-github-hashicorp-go-checkpoint-dev (0.0~git20151022.0.e4b2dc3-1) ...
Setting up golang-github-hashicorp-golang-lru-dev (0.0~git20160207.0.a0d98a5-1) ...
Setting up golang-github-hashicorp-uuid-dev (0.0~git20160218.0.6994546-1) ...
Setting up golang-github-hashicorp-go-immutable-radix-dev (0.0~git20160222.0.8e8ed81-1) ...
Setting up golang-github-hashicorp-go-memdb-dev (0.0~git20160301.0.98f52f5-1) ...
Setting up golang-github-hashicorp-go-multierror-dev (0.0~git20150916.0.d30f099-1) ...
Setting up golang-golang-x-sys-dev (0.0~git20150612-1) ...
Setting up golang-github-hashicorp-go-reap-dev (0.0~git20160113.0.2d85522-1) ...
Setting up golang-github-hashicorp-go-syslog-dev (0.0~git20150218.0.42a2b57-1) ...
Setting up golang-github-hashicorp-hcl-dev (0.0~git20151110.0.fa160f1-1) ...
Setting up golang-github-hashicorp-logutils-dev (0.0~git20150609.0.0dc08b1-1) ...
Setting up golang-github-hashicorp-mdns-dev (0.0~git20150317.0.2b439d3-1) ...
Setting up golang-github-hashicorp-memberlist-dev (0.0~git20160225.0.ae9a8d9-1) ...
Setting up golang-github-hashicorp-net-rpc-msgpackrpc-dev (0.0~git20151015.0.d31f7b9-1) ...
Setting up golang-github-hashicorp-yamux-dev (0.0~git20151129.0.df94978-1) ...
Setting up golang-github-hashicorp-scada-client-dev (0.0~git20150828.0.84989fd-1) ...
Setting up golang-github-mitchellh-cli-dev (0.0~git20150618.0.8102d0e-1) ...
Setting up golang-github-mitchellh-mapstructure-dev (0.0~git20150717.0.281073e-2) ...
Setting up golang-github-ryanuber-columnize-dev (2.0.1-1) ...
Setting up golang-github-hashicorp-serf-dev (0.7.0~ds1-1) ...
Setting up golang-github-inconshreveable-muxado-dev (0.0~git20140312.0.f693c7e-1) ...
Setting up debhelper (9.20160306) ...
Setting up dh-golang (1.12) ...
Setting up sbuild-build-depends-consul-dummy (0.invalid.0) ...
Setting up dh-strip-nondeterminism (0.016-1) ...
Processing triggers for libc-bin (2.21-7) ...
Processing triggers for ca-certificates (20160104) ...
Updating certificates in /etc/ssl/certs...
173 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
W: No sandbox user '_apt' on the system, can not drop privileges

+------------------------------------------------------------------------------+
| Build environment                                                            |
+------------------------------------------------------------------------------+

Kernel: Linux 3.19.0-trunk-armmp armhf (armv7l)
Toolchain package versions: binutils_2.26-3 dpkg-dev_1.18.4 g++-5_5.3.1-8+rpi1 gcc-5_5.3.1-8+rpi1 libc6-dev_2.21-7 libstdc++-4.9-dev_4.9.3-10 libstdc++-5-dev_5.3.1-8+rpi1 libstdc++6_5.3.1-8+rpi1 linux-libc-dev_3.18.5-1~exp1+rpi19+stretch
Package versions: acl_2.2.52-3 adduser_3.113+nmu3 apt_1.2.3 autotools-dev_20150820.1 base-files_9.5+rpi1 base-passwd_3.5.39 bash_4.3-14 binutils_2.26-3 bsdmainutils_9.0.6 bsdutils_1:2.27.1-3 build-essential_11.7 bzip2_1.0.6-8 ca-certificates_20160104 coreutils_8.24-1 cpio_2.11+dfsg-4.1 cpp_4:5.3.1-1+rpi1 cpp-5_5.3.1-8+rpi1 dash_0.5.8-2.1 debconf_1.5.58 debfoster_2.7-2 debhelper_9.20160306 debianutils_4.7 dh-golang_1.12 dh-strip-nondeterminism_0.016-1 diffutils_1:3.3-3 dmsetup_2:1.02.115-2 dpkg_1.18.4 dpkg-dev_1.18.4 e2fslibs_1.42.13-1 e2fsprogs_1.42.13-1 fakeroot_1.20.2-1 file_1:5.25-2 findutils_4.6.0+git+20160126-2 g++_4:5.3.1-1+rpi1 g++-5_5.3.1-8+rpi1 gcc_4:5.3.1-1+rpi1 gcc-4.6-base_4.6.4-5+rpi1 gcc-4.7-base_4.7.3-11+rpi1 gcc-4.8-base_4.8.5-4 gcc-4.9-base_4.9.3-10 gcc-5_5.3.1-8+rpi1 gcc-5-base_5.3.1-8+rpi1 gettext_0.19.7-2 gettext-base_0.19.7-2 gnupg_1.4.20-1 golang-check.v1-dev_0.0+git20150729.11d3bc7-2 golang-codegangsta-cli-dev_0.0~git20150117-3 golang-context-dev_0.0~git20140604.1.14f550f-1 golang-dbus-dev_3-1 golang-dns-dev_0.0~git20151030.0.6a15566-1 golang-docker-dev_1.8.3~ds1-2 golang-github-armon-circbuf-dev_0.0~git20150827.0.bbbad09-1 golang-github-armon-go-metrics-dev_0.0~git20151207.0.06b6099-1 golang-github-armon-go-radix-dev_0.0~git20150602.0.fbd82e8-1 golang-github-armon-gomdb-dev_0.0~git20150106.0.151f2e0-1 golang-github-boltdb-bolt-dev_1.1.0-1 golang-github-codegangsta-cli-dev_0.0~git20150117-3 golang-github-coreos-go-systemd-dev_4-1 golang-github-datadog-datadog-go-dev_0.0~git20150930.0.b050cd8-1 golang-github-docker-docker-dev_1.8.3~ds1-2 golang-github-elazarl-go-bindata-assetfs-dev_0.0~git20151224.0.57eb5e1-1 golang-github-fsouza-go-dockerclient-dev_0.0+git20150905-1 golang-github-gorilla-mux-dev_0.0~git20150814.0.f7b6aaa-1 golang-github-hashicorp-consul-migrate-dev_0.1.0-1 golang-github-hashicorp-errwrap-dev_0.0~git20141028.0.7554cd9-1 golang-github-hashicorp-go-checkpoint-dev_0.0~git20151022.0.e4b2dc3-1 golang-github-hashicorp-go-cleanhttp-dev_0.0~git20151022.0.5df5ddc-1 golang-github-hashicorp-go-immutable-radix-dev_0.0~git20160222.0.8e8ed81-1 golang-github-hashicorp-go-memdb-dev_0.0~git20160301.0.98f52f5-1 golang-github-hashicorp-go-msgpack-dev_0.0~git20150518-1 golang-github-hashicorp-go-multierror-dev_0.0~git20150916.0.d30f099-1 golang-github-hashicorp-go-reap-dev_0.0~git20160113.0.2d85522-1 golang-github-hashicorp-go-syslog-dev_0.0~git20150218.0.42a2b57-1 golang-github-hashicorp-golang-lru-dev_0.0~git20160207.0.a0d98a5-1 golang-github-hashicorp-hcl-dev_0.0~git20151110.0.fa160f1-1 golang-github-hashicorp-logutils-dev_0.0~git20150609.0.0dc08b1-1 golang-github-hashicorp-mdns-dev_0.0~git20150317.0.2b439d3-1 golang-github-hashicorp-memberlist-dev_0.0~git20160225.0.ae9a8d9-1 golang-github-hashicorp-net-rpc-msgpackrpc-dev_0.0~git20151015.0.d31f7b9-1 golang-github-hashicorp-raft-boltdb-dev_0.0~git20150201.d1e82c1-1 golang-github-hashicorp-raft-dev_0.0~git20150728.9b586e2-2 golang-github-hashicorp-raft-mdb-dev_0.0~git20150806.0.55f2947-1 golang-github-hashicorp-scada-client-dev_0.0~git20150828.0.84989fd-1 golang-github-hashicorp-serf-dev_0.7.0~ds1-1 golang-github-hashicorp-uuid-dev_0.0~git20160218.0.6994546-1 golang-github-hashicorp-yamux-dev_0.0~git20151129.0.df94978-1 golang-github-inconshreveable-muxado-dev_0.0~git20140312.0.f693c7e-1 golang-github-julienschmidt-httprouter-dev_1.1-1 golang-github-mitchellh-cli-dev_0.0~git20150618.0.8102d0e-1 golang-github-mitchellh-mapstructure-dev_0.0~git20150717.0.281073e-2 golang-github-opencontainers-runc-dev_0.0.8+dfsg-1 
golang-github-opencontainers-specs-dev_0.0~git20150829.0.e9cb564-1 golang-github-prometheus-common-dev_0+git20160104.0a3005b-1 golang-github-ryanuber-columnize-dev_2.0.1-1 golang-github-sirupsen-logrus-dev_0.8.7-2 golang-github-stretchr-testify-dev_1.0-2 golang-github-ugorji-go-codec-dev_0.0~git20151130.0.357a44b-1 golang-github-ugorji-go-msgpack-dev_0.0~git20130605.792643-1 golang-github-vishvananda-netlink-dev_0.0~git20160306.0.4fdf23c-1 golang-github-vishvananda-netns-dev_0.0~git20150710.0.604eaf1-1 golang-go_2:1.5.3-1+rpi1 golang-gocapability-dev_0.0~git20150506.1.66ef2aa-1 golang-golang-x-crypto-dev_1:0.0~git20151201.0.7b85b09-2 golang-golang-x-net-dev_1:0.0+git20160110.4fd4a9f-1 golang-golang-x-sys-dev_0.0~git20150612-1 golang-gopkg-mgo.v2-dev_2015.12.06-1 golang-gopkg-tomb.v2-dev_0.0~git20140626.14b3d72-1 golang-gopkg-vmihailenco-msgpack.v2-dev_2.4.7-1 golang-goprotobuf-dev_0.0~git20150526-2 golang-logrus-dev_0.8.7-2 golang-objx-dev_0.0~git20140527-4 golang-procfs-dev_0+git20150616.c91d8ee-1 golang-prometheus-client-dev_0.7.0+ds-3 golang-protobuf-extensions-dev_0+git20150513.fc2b8d3-4 golang-src_2:1.5.3-1+rpi1 golang-x-text-dev_0+git20151217.cf49866-1 gpgv_1.4.20-1 grep_2.22-1 groff-base_1.22.3-7 gzip_1.6-4 hostname_3.16 init_1.24 init-system-helpers_1.24 initramfs-tools_0.120 initscripts_2.88dsf-59.2 insserv_1.14.0-5.2 intltool-debian_0.35.0+20060710.4 klibc-utils_2.0.4-7+rpi1 kmod_22-1 libacl1_2.2.52-3 libapparmor1_2.10-3 libapt-pkg4.12_1.0.9.10 libapt-pkg5.0_1.2.3 libarchive-zip-perl_1.56-2 libasan1_4.9.3-10 libasan2_5.3.1-8+rpi1 libatomic1_5.3.1-8+rpi1 libattr1_1:2.4.47-2 libaudit-common_1:2.4.5-1 libaudit1_1:2.4.5-1 libblkid1_2.27.1-3 libbz2-1.0_1.0.6-8 libc-bin_2.21-7 libc-dev-bin_2.21-7 libc6_2.21-7 libc6-dev_2.21-7 libcap2_1:2.24-12 libcap2-bin_1:2.24-12 libcc1-0_5.3.1-8+rpi1 libcomerr2_1.42.13-1 libcroco3_0.6.11-1 libcryptsetup4_2:1.7.0-2 libdb5.3_5.3.28-11 libdbus-1-3_1.10.6-1 libdebconfclient0_0.204 libdevmapper1.02.1_2:1.02.115-2 libdpkg-perl_1.18.4 libdrm2_2.4.66-2 libfakeroot_1.20.2-1 libfdisk1_2.27.1-3 libffi6_3.2.1-4 libfile-stripnondeterminism-perl_0.016-1 libgc1c2_1:7.4.2-7.3 libgcc-4.9-dev_4.9.3-10 libgcc-5-dev_5.3.1-8+rpi1 libgcc1_1:5.3.1-8+rpi1 libgcrypt20_1.6.4-5 libgdbm3_1.8.3-13.1 libglib2.0-0_2.46.2-3 libgmp10_2:6.1.0+dfsg-2 libgomp1_5.3.1-8+rpi1 libgpg-error0_1.21-1 libicu55_55.1-7 libisl15_0.16.1-1 libklibc_2.0.4-7+rpi1 libkmod2_22-1 liblmdb-dev_0.9.17-3 liblmdb0_0.9.17-3 liblz4-1_0.0~r131-1 liblzma5_5.1.1alpha+20120614-2.1 libmagic1_1:5.25-2 libmount1_2.27.1-3 libmpc3_1.0.3-1 libmpfr4_3.1.3-2 libncurses5_6.0+20151024-2 libncursesw5_6.0+20151024-2 libpam-modules_1.1.8-3.2 libpam-modules-bin_1.1.8-3.2 libpam-runtime_1.1.8-3.2 libpam0g_1.1.8-3.2 libpcre3_2:8.38-1 libperl5.22_5.22.1-5 libpipeline1_1.4.1-2 libpng12-0_1.2.54-1 libprocps3_2:3.3.9-9 libprocps5_2:3.3.11-3 libprotobuf9v5_2.6.1-1.3 libprotoc9v5_2.6.1-1.3 libreadline6_6.3-8+b3 libsasl2-2_2.1.26.dfsg1-14+b1 libsasl2-dev_2.1.26.dfsg1-14+b1 libsasl2-modules-db_2.1.26.dfsg1-14+b1 libseccomp2_2.2.3-2 libselinux1_2.4-3 libsemanage-common_2.4-3 libsemanage1_2.4-3 libsepol1_2.4-2 libslang2_2.3.0-2+b1 libsmartcols1_2.27.1-3 libss2_1.42.13-1 libssl1.0.2_1.0.2g-1 libstdc++-4.9-dev_4.9.3-10 libstdc++-5-dev_5.3.1-8+rpi1 libstdc++6_5.3.1-8+rpi1 libsystemd0_228-6 libtimedate-perl_2.3000-2 libtinfo5_6.0+20151024-2 libubsan0_5.3.1-8+rpi1 libudev1_228-6 libunistring0_0.9.3-5.2 libusb-0.1-4_2:0.1.12-28 libustr-1.0-1_1.0.4-5 libuuid1_2.27.1-3 libxml2_2.9.3+dfsg1-1 linux-libc-dev_3.18.5-1~exp1+rpi19+stretch 
login_1:4.2-3.1 lsb-base_9.20160110+rpi1 make_4.1-5 makedev_2.3.1-93 man-db_2.7.5-1 manpages_4.04-1 mawk_1.3.3-17 mount_2.27.1-3 multiarch-support_2.21-7 nano_2.5.1-1 ncurses-base_6.0+20151024-2 ncurses-bin_6.0+20151024-2 openssl_1.0.2g-1 passwd_1:4.2-3.1 patch_2.7.5-1 perl_5.22.1-5 perl-base_5.22.1-5 perl-modules-5.22_5.22.1-7 po-debconf_1.0.19 procps_2:3.3.11-3 protobuf-compiler_2.6.1-1.3 raspbian-archive-keyring_20120528.2 readline-common_6.3-8 sbuild-build-depends-consul-dummy_0.invalid.0 sbuild-build-depends-core-dummy_0.invalid.0 sed_4.2.2-6.1 sensible-utils_0.0.9 startpar_0.59-3 systemd_228-6 systemd-sysv_228-6 sysv-rc_2.88dsf-59.2 sysvinit-utils_2.88dsf-59.2 tar_1.28-2.1 tzdata_2016a-1 udev_228-6 util-linux_2.27.1-3 xz-utils_5.1.1alpha+20120614-2.1 zlib1g_1:1.2.8.dfsg-2+b1

+------------------------------------------------------------------------------+
| Build                                                                        |
+------------------------------------------------------------------------------+


Unpack source
-------------

gpgv: keyblock resource `/sbuild-nonexistent/.gnupg/trustedkeys.gpg': file open error
gpgv: Signature made Tue Mar  8 23:44:07 2016 UTC using RSA key ID 53968D1B
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./consul_0.6.3~dfsg-1.dsc
dpkg-source: info: extracting consul in consul-0.6.3~dfsg
dpkg-source: info: unpacking consul_0.6.3~dfsg.orig.tar.xz
dpkg-source: info: unpacking consul_0.6.3~dfsg-1.debian.tar.xz
dpkg-source: info: applying 0001-update-test-fixture-paths.patch

Check disc space
----------------

Sufficient free space for build

User Environment
----------------

DEB_BUILD_OPTIONS=parallel=4
HOME=/sbuild-nonexistent
LOGNAME=buildd
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
SCHROOT_ALIAS_NAME=stretch-staging-armhf-sbuild
SCHROOT_CHROOT_NAME=stretch-staging-armhf-sbuild
SCHROOT_COMMAND=env
SCHROOT_GID=109
SCHROOT_GROUP=buildd
SCHROOT_SESSION_ID=stretch-staging-armhf-sbuild-16f760e8-1aba-4e06-962d-36c236f5e045
SCHROOT_UID=104
SCHROOT_USER=buildd
SHELL=/bin/sh
TERM=xterm
USER=buildd

dpkg-buildpackage
-----------------

dpkg-buildpackage: source package consul
dpkg-buildpackage: source version 0.6.3~dfsg-1
dpkg-buildpackage: source distribution unstable
 dpkg-source --before-build consul-0.6.3~dfsg
dpkg-buildpackage: host architecture armhf
 fakeroot debian/rules clean
dh clean --buildsystem=golang --with=golang
   dh_testdir -O--buildsystem=golang
   dh_auto_clean -O--buildsystem=golang
   dh_clean -O--buildsystem=golang
 debian/rules build-arch
dh build-arch --buildsystem=golang --with=golang
   dh_testdir -a -O--buildsystem=golang
   dh_update_autotools_config -a -O--buildsystem=golang
   dh_auto_configure -a -O--buildsystem=golang
   dh_auto_build -a -O--buildsystem=golang
	go install -v github.com/hashicorp/consul github.com/hashicorp/consul/acl github.com/hashicorp/consul/api github.com/hashicorp/consul/command github.com/hashicorp/consul/command/agent github.com/hashicorp/consul/consul github.com/hashicorp/consul/consul/state github.com/hashicorp/consul/consul/structs github.com/hashicorp/consul/testutil github.com/hashicorp/consul/tlsutil github.com/hashicorp/consul/watch
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/volume
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/promise
github.com/hashicorp/go-cleanhttp
github.com/hashicorp/serf/coordinate
github.com/armon/circbuf
github.com/armon/go-metrics
github.com/DataDog/datadog-go/statsd
github.com/elazarl/go-bindata-assetfs
github.com/hashicorp/consul/api
github.com/armon/go-metrics/datadog
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/parsers
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ulimit
github.com/Sirupsen/logrus
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/units
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/pools
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils
github.com/opencontainers/runc/libcontainer/user
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/stdcopy
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive
github.com/armon/go-radix
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/homedir
github.com/hashicorp/golang-lru/simplelru
github.com/hashicorp/hcl/hcl/strconv
github.com/hashicorp/golang-lru
github.com/hashicorp/hcl/hcl/token
github.com/hashicorp/go-msgpack/codec
github.com/hashicorp/hcl/hcl/ast
github.com/hashicorp/hcl/hcl/scanner
github.com/hashicorp/hcl/json/token
github.com/hashicorp/hcl/json/scanner
github.com/hashicorp/hcl/hcl/parser
github.com/hashicorp/hcl/json/parser
github.com/hashicorp/go-immutable-radix
github.com/hashicorp/hcl
github.com/hashicorp/go-memdb
github.com/fsouza/go-dockerclient
github.com/hashicorp/consul/acl
github.com/hashicorp/consul/tlsutil
github.com/hashicorp/errwrap
github.com/hashicorp/go-multierror
github.com/boltdb/bolt
github.com/hashicorp/yamux
github.com/hashicorp/consul/consul/structs
github.com/hashicorp/memberlist
github.com/hashicorp/consul/consul/state
github.com/hashicorp/net-rpc-msgpackrpc
github.com/hashicorp/raft
github.com/inconshreveable/muxado/proto/buffer
github.com/inconshreveable/muxado/proto/frame
github.com/hashicorp/serf/serf
github.com/hashicorp/consul/watch
github.com/inconshreveable/muxado/proto
github.com/hashicorp/go-checkpoint
golang.org/x/sys/unix
github.com/inconshreveable/muxado/proto/ext
github.com/inconshreveable/muxado
github.com/hashicorp/raft-boltdb
github.com/hashicorp/go-syslog
github.com/hashicorp/logutils
github.com/hashicorp/scada-client
github.com/miekg/dns
github.com/hashicorp/consul/consul
github.com/hashicorp/go-reap
golang.org/x/crypto/ssh/terminal
github.com/mitchellh/mapstructure
github.com/mitchellh/cli
github.com/ryanuber/columnize
github.com/hashicorp/consul/testutil
github.com/hashicorp/consul/command/agent
github.com/hashicorp/consul/command
github.com/hashicorp/consul
   debian/rules override_dh_auto_test
make[1]: Entering directory '/<<PKGBUILDDIR>>'
## TODO patch out tests that rely on network via -test.short or
## something (which doesn't appear to be used anywhere ATM, so might
## be amenable as a PR to upstream)
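
[Editor's note] The TODO above refers to Go's standard convention for network-dependent tests: running `go test -short` (equivalently, passing -test.short to the compiled test binary) sets a flag that tests can query with testing.Short() and skip themselves. A minimal sketch of that pattern follows; it is not taken from the consul sources, and the package and test names are illustrative only.

package agent_test

import "testing"

// TestDNSOverNetwork is a hypothetical test name used for illustration.
// When the test binary is run with -test.short (i.e. `go test -short`),
// testing.Short() reports true and the network-dependent body is skipped.
func TestDNSOverNetwork(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping test that requires network access")
	}
	// ... test body that performs real DNS lookups over the network ...
}
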
dh_auto_test
	go test -v github.com/hashicorp/consul github.com/hashicorp/consul/acl github.com/hashicorp/consul/api github.com/hashicorp/consul/command github.com/hashicorp/consul/command/agent github.com/hashicorp/consul/consul github.com/hashicorp/consul/consul/state github.com/hashicorp/consul/consul/structs github.com/hashicorp/consul/testutil github.com/hashicorp/consul/tlsutil github.com/hashicorp/consul/watch
# github.com/hashicorp/consul/command/agent
src/github.com/hashicorp/consul/command/agent/dns_test.go:1762: undefined: dns.ErrTruncated
testing: warning: no tests to run
PASS
ok  	github.com/hashicorp/consul	0.097s
=== RUN   TestRootACL
--- PASS: TestRootACL (0.00s)
=== RUN   TestStaticACL
--- PASS: TestStaticACL (0.00s)
=== RUN   TestPolicyACL
--- PASS: TestPolicyACL (0.00s)
=== RUN   TestPolicyACL_Parent
--- PASS: TestPolicyACL_Parent (0.00s)
=== RUN   TestPolicyACL_Keyring
--- PASS: TestPolicyACL_Keyring (0.00s)
=== RUN   TestCache_GetPolicy
--- PASS: TestCache_GetPolicy (0.00s)
=== RUN   TestCache_GetACL
--- PASS: TestCache_GetACL (0.00s)
=== RUN   TestCache_ClearACL
--- PASS: TestCache_ClearACL (0.01s)
=== RUN   TestCache_Purge
--- PASS: TestCache_Purge (0.02s)
=== RUN   TestCache_GetACLPolicy
--- PASS: TestCache_GetACLPolicy (0.01s)
=== RUN   TestCache_GetACL_Parent
--- PASS: TestCache_GetACL_Parent (0.01s)
=== RUN   TestCache_GetACL_ParentCache
--- PASS: TestCache_GetACL_ParentCache (0.00s)
=== RUN   TestParse
--- PASS: TestParse (0.00s)
=== RUN   TestParse_JSON
--- PASS: TestParse_JSON (0.00s)
=== RUN   TestACLPolicy_badPolicy
--- PASS: TestACLPolicy_badPolicy (0.00s)
PASS
ok  	github.com/hashicorp/consul/acl	0.104s
=== RUN   TestACL_CreateDestroy
=== RUN   TestACL_CloneDestroy
=== RUN   TestACL_Info
=== RUN   TestACL_List
=== RUN   TestAgent_Self
=== RUN   TestAgent_Members
=== RUN   TestAgent_Services
=== RUN   TestAgent_Services_CheckPassing
=== RUN   TestAgent_Services_CheckBadStatus
=== RUN   TestAgent_ServiceAddress
=== RUN   TestAgent_Services_MultipleChecks
=== RUN   TestAgent_SetTTLStatus
=== RUN   TestAgent_Checks
=== RUN   TestAgent_CheckStartPassing
=== RUN   TestAgent_Checks_serviceBound
=== RUN   TestAgent_Checks_Docker
=== RUN   TestAgent_Join
=== RUN   TestAgent_ForceLeave
=== RUN   TestServiceMaintenance
=== RUN   TestNodeMaintenance
=== RUN   TestDefaultConfig_env
=== RUN   TestSetQueryOptions
=== RUN   TestSetWriteOptions
=== RUN   TestRequestToHTTP
=== RUN   TestParseQueryMeta
=== RUN   TestAPI_UnixSocket
=== RUN   TestAPI_durToMsec
--- PASS: TestAPI_durToMsec (0.00s)
=== RUN   TestAPI_IsServerError
--- PASS: TestAPI_IsServerError (0.00s)
=== RUN   TestCatalog_Datacenters
=== RUN   TestCatalog_Nodes
=== RUN   TestCatalog_Services
=== RUN   TestCatalog_Service
=== RUN   TestCatalog_Node
=== RUN   TestCatalog_Registration
=== RUN   TestCoordinate_Datacenters
=== RUN   TestCoordinate_Nodes
=== RUN   TestEvent_FireList
=== RUN   TestHealth_Node
=== RUN   TestHealth_Checks
=== RUN   TestHealth_Service
=== RUN   TestHealth_State
=== RUN   TestClientPutGetDelete
=== RUN   TestClient_List_DeleteRecurse
=== RUN   TestClient_DeleteCAS
=== RUN   TestClient_CAS
=== RUN   TestClient_WatchGet
=== RUN   TestClient_WatchList
=== RUN   TestClient_Keys_DeleteRecurse
=== RUN   TestClient_AcquireRelease
=== RUN   TestLock_LockUnlock
=== RUN   TestLock_ForceInvalidate
=== RUN   TestLock_DeleteKey
=== RUN   TestLock_Contend
=== RUN   TestLock_Destroy
=== RUN   TestLock_Conflict
=== RUN   TestLock_ReclaimLock
=== RUN   TestLock_MonitorRetry
=== RUN   TestLock_OneShot
=== RUN   TestPreparedQuery
=== RUN   TestSemaphore_AcquireRelease
=== RUN   TestSemaphore_ForceInvalidate
=== RUN   TestSemaphore_DeleteKey
=== RUN   TestSemaphore_Contend
=== RUN   TestSemaphore_BadLimit
=== RUN   TestSemaphore_Destroy
=== RUN   TestSemaphore_Conflict
=== RUN   TestSemaphore_MonitorRetry
=== RUN   TestSemaphore_OneShot
=== RUN   TestSession_CreateDestroy
=== RUN   TestSession_CreateRenewDestroy
=== RUN   TestSession_CreateRenewDestroyRenew
=== RUN   TestSession_CreateDestroyRenewPeriodic
=== RUN   TestSession_Info
=== RUN   TestSession_Node
=== RUN   TestSession_List
=== RUN   TestStatusLeader
=== RUN   TestStatusPeers
--- SKIP: TestACL_List (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Self (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Members (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Services (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Services_CheckPassing (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Services_CheckBadStatus (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_ServiceAddress (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Services_MultipleChecks (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_SetTTLStatus (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Checks (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_CheckStartPassing (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Checks_serviceBound (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Checks_Docker (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestACL_CreateDestroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestACL_CloneDestroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestACL_Info (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_ForceLeave (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestServiceMaintenance (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestNodeMaintenance (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Join (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- PASS: TestDefaultConfig_env (0.00s)
--- SKIP: TestSetQueryOptions (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSetWriteOptions (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestRequestToHTTP (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- PASS: TestParseQueryMeta (0.00s)
--- SKIP: TestCatalog_Datacenters (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAPI_UnixSocket (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Nodes (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Services (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Service (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Node (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Registration (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCoordinate_Datacenters (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCoordinate_Nodes (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestEvent_FireList (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestHealth_Node (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestHealth_Checks (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestHealth_Service (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestHealth_State (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClientPutGetDelete (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_List_DeleteRecurse (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_DeleteCAS (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_CAS (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_WatchGet (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_WatchList (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_Keys_DeleteRecurse (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_AcquireRelease (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_LockUnlock (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_ForceInvalidate (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_DeleteKey (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_Destroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_Contend (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_Conflict (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_ReclaimLock (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_MonitorRetry (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_OneShot (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestPreparedQuery (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_ForceInvalidate (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_AcquireRelease (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_DeleteKey (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_Contend (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_BadLimit (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_Destroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_Conflict (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_MonitorRetry (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_OneShot (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_CreateDestroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_CreateRenewDestroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_CreateRenewDestroyRenew (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_CreateDestroyRenewPeriodic (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_Info (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_Node (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_List (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestStatusLeader (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestStatusPeers (0.00s)
	server.go:143: consul not found on $PATH, skipping
PASS
ok  	github.com/hashicorp/consul/api	0.120s
=== RUN   TestConfigTestCommand_implements
--- PASS: TestConfigTestCommand_implements (0.00s)
=== RUN   TestConfigTestCommandFailOnEmptyFile
--- PASS: TestConfigTestCommandFailOnEmptyFile (0.00s)
=== RUN   TestConfigTestCommandSucceedOnEmptyDir
--- PASS: TestConfigTestCommandSucceedOnEmptyDir (0.00s)
=== RUN   TestConfigTestCommandSucceedOnMinimalConfigFile
--- PASS: TestConfigTestCommandSucceedOnMinimalConfigFile (0.00s)
=== RUN   TestConfigTestCommandSucceedOnMinimalConfigDir
--- PASS: TestConfigTestCommandSucceedOnMinimalConfigDir (0.00s)
=== RUN   TestEventCommand_implements
--- PASS: TestEventCommand_implements (0.00s)
=== RUN   TestEventCommandRun
2016/03/17 06:34:59 [DEBUG] http: Request GET /v1/agent/self (3.522334ms) from=127.0.0.1:32844
2016/03/17 06:34:59 [DEBUG] http: Request PUT /v1/event/fire/cmd (1.136334ms) from=127.0.0.1:32844
2016/03/17 06:34:59 [DEBUG] http: Shutting down http server (127.0.0.1:10411)
--- PASS: TestEventCommandRun (1.08s)
=== RUN   TestExecCommand_implements
--- PASS: TestExecCommand_implements (0.00s)
=== RUN   TestExecCommandRun
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (438.666µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (345.334µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (354.666µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (334µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (334µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (310.333µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (341.333µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (379.333µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (323.666µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (317µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (313.667µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (333.334µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (275.333µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (328.667µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (325µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (288.333µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (289.334µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (299.667µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (278.333µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (321µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (288.334µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (304.667µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (358µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (419.333µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (289µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (347µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (297.667µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (373.667µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (322.666µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (310.333µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (303µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (327µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (375.666µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (371.666µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (349.333µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (422.667µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (424µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (380.334µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (360.334µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (359µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (327.666µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (393µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (483.666µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (335µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (326.667µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (313µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (319µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (292.334µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (295.334µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (306µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (275.333µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (357.334µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (433.333µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (278.334µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (299.666µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (321.666µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (314.666µs) from=127.0.0.1:60827
2016/03/17 06:35:00 [DEBUG] http: Request GET /v1/catalog/nodes (364.333µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (318µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (627µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (308µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (316µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (327.333µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (281µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (275µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (294µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (326.334µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (317µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (279.666µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (449µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (297µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (299.333µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (292µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (536.667µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (310.666µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (289.667µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (282µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (337µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/catalog/nodes (382.667µs) from=127.0.0.1:60827
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/agent/self (903.667µs) from=127.0.0.1:60828
2016/03/17 06:35:01 [DEBUG] http: Request PUT /v1/session/create (165.542ms) from=127.0.0.1:60828
2016/03/17 06:35:01 [DEBUG] http: Request PUT /v1/kv/_rexec/0bd87b40-c684-6e28-37fb-e0648093640f/job?acquire=0bd87b40-c684-6e28-37fb-e0648093640f (221.919ms) from=127.0.0.1:60828
2016/03/17 06:35:01 [DEBUG] http: Request PUT /v1/event/fire/_rexec (524.334µs) from=127.0.0.1:60828
2016/03/17 06:35:01 [DEBUG] http: Request GET /v1/kv/_rexec/0bd87b40-c684-6e28-37fb-e0648093640f/?keys=&wait=400ms (459µs) from=127.0.0.1:60828
2016/03/17 06:35:02 [DEBUG] http: Request GET /v1/kv/_rexec/0bd87b40-c684-6e28-37fb-e0648093640f/?index=5&keys=&wait=400ms (218.699667ms) from=127.0.0.1:60828
2016/03/17 06:35:02 [DEBUG] http: Request GET /v1/kv/_rexec/0bd87b40-c684-6e28-37fb-e0648093640f/?index=6&keys=&wait=400ms (245.058ms) from=127.0.0.1:60828
2016/03/17 06:35:02 [DEBUG] http: Request GET /v1/kv/_rexec/0bd87b40-c684-6e28-37fb-e0648093640f/Node%202/out/00000 (762.333µs) from=127.0.0.1:60828
2016/03/17 06:35:02 [DEBUG] http: Request GET /v1/kv/_rexec/0bd87b40-c684-6e28-37fb-e0648093640f/?index=7&keys=&wait=400ms (160.107ms) from=127.0.0.1:60828
2016/03/17 06:35:02 [DEBUG] http: Request GET /v1/kv/_rexec/0bd87b40-c684-6e28-37fb-e0648093640f/Node%202/exit (431.334µs) from=127.0.0.1:60828
2016/03/17 06:35:02 [DEBUG] http: Request GET /v1/kv/_rexec/0bd87b40-c684-6e28-37fb-e0648093640f/?index=8&keys=&wait=400ms (421.076333ms) from=127.0.0.1:60828
2016/03/17 06:35:03 [DEBUG] http: Request PUT /v1/session/destroy/0bd87b40-c684-6e28-37fb-e0648093640f (190.546667ms) from=127.0.0.1:60829
2016/03/17 06:35:03 [DEBUG] http: Request DELETE /v1/kv/_rexec/0bd87b40-c684-6e28-37fb-e0648093640f?recurse= (150.204ms) from=127.0.0.1:60828
2016/03/17 06:35:03 [DEBUG] http: Request PUT /v1/session/destroy/0bd87b40-c684-6e28-37fb-e0648093640f (148.725333ms) from=127.0.0.1:60828
2016/03/17 06:35:03 [DEBUG] http: Shutting down http server (127.0.0.1:10421)
--- PASS: TestExecCommandRun (3.74s)
=== RUN   TestExecCommandRun_CrossDC
2016/03/17 06:35:04 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (279µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (296µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (375µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (381.333µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (403.667µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (394µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (476.333µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (764.333µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (320.333µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (338.667µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (307µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (375µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (292µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (472.667µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (385µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (278µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (347µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (275µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (381.666µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (389.333µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (263.333µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (348.334µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (316µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (336µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (316.334µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (283µs) from=127.0.0.1:41348
2016/03/17 06:35:04 [DEBUG] http: Request GET /v1/catalog/nodes (310.667µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (309µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (275.334µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (290.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (287.334µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (309.667µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (281.334µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (281.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (283.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (309.666µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (277.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (303.334µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (266µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (292µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (317µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (292.334µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (275.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (308µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (299.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (289.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (320.334µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (307.666µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (333.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (295.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (289.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (306µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (276.667µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (448.667µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (459.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (306µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (295.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (316.334µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (289.334µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (346µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (320.666µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (290.334µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (285.667µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (327.667µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (327.666µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (276µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (428.666µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (310µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (274µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (323.666µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (312.667µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (468.333µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (283µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (288.666µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (298.666µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (322.667µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (308µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (328.667µs) from=127.0.0.1:41348
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (312.334µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (344.667µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (336µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (315.667µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (286.333µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (349µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (285.666µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (320.667µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (340.666µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (320.333µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (278.333µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (317µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (292.334µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (288.334µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (416µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (320.333µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (254.666µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (268.333µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (442.333µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (340.666µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (258µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (256µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (422.334µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (253.333µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (295.333µs) from=127.0.0.1:34718
2016/03/17 06:35:05 [DEBUG] http: Request GET /v1/catalog/nodes (276.667µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (263µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (301.333µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (295µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (407µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (440µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (539.667µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (278µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (263.667µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (276.667µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (302.667µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (283µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/catalog/nodes (292µs) from=127.0.0.1:34718
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/agent/self?dc=dc2 (766.667µs) from=127.0.0.1:41351
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/health/service/consul?dc=dc2&passing=1 (5.343666ms) from=127.0.0.1:41351
2016/03/17 06:35:06 [DEBUG] http: Request PUT /v1/session/create?dc=dc2 (149.911333ms) from=127.0.0.1:41351
2016/03/17 06:35:06 [DEBUG] http: Request PUT /v1/kv/_rexec/ba7f7a53-653f-7282-eeea-e3050a5a1594/job?acquire=ba7f7a53-653f-7282-eeea-e3050a5a1594&dc=dc2 (240.236667ms) from=127.0.0.1:41351
2016/03/17 06:35:06 [DEBUG] http: Request PUT /v1/event/fire/_rexec?dc=dc2 (6.566333ms) from=127.0.0.1:41351
2016/03/17 06:35:06 [DEBUG] http: Request GET /v1/kv/_rexec/ba7f7a53-653f-7282-eeea-e3050a5a1594/?dc=dc2&keys=&wait=400ms (2.069333ms) from=127.0.0.1:41351
2016/03/17 06:35:07 [DEBUG] http: Request GET /v1/kv/_rexec/ba7f7a53-653f-7282-eeea-e3050a5a1594/?dc=dc2&index=5&keys=&wait=400ms (209.431ms) from=127.0.0.1:41351
2016/03/17 06:35:07 [DEBUG] http: Request GET /v1/kv/_rexec/ba7f7a53-653f-7282-eeea-e3050a5a1594/?dc=dc2&index=6&keys=&wait=400ms (236.342334ms) from=127.0.0.1:41351
2016/03/17 06:35:07 [DEBUG] http: Request GET /v1/kv/_rexec/ba7f7a53-653f-7282-eeea-e3050a5a1594/Node%204/out/00000?dc=dc2 (2.429ms) from=127.0.0.1:41351
2016/03/17 06:35:07 [DEBUG] http: Request GET /v1/kv/_rexec/ba7f7a53-653f-7282-eeea-e3050a5a1594/?dc=dc2&index=7&keys=&wait=400ms (125.552ms) from=127.0.0.1:41351
2016/03/17 06:35:07 [DEBUG] http: Request GET /v1/kv/_rexec/ba7f7a53-653f-7282-eeea-e3050a5a1594/Node%204/exit?dc=dc2 (9.042ms) from=127.0.0.1:41351
2016/03/17 06:35:07 [DEBUG] http: Request GET /v1/kv/_rexec/ba7f7a53-653f-7282-eeea-e3050a5a1594/?dc=dc2&index=8&keys=&wait=400ms (424.895667ms) from=127.0.0.1:41351
2016/03/17 06:35:08 [DEBUG] http: Request PUT /v1/session/destroy/ba7f7a53-653f-7282-eeea-e3050a5a1594?dc=dc2 (219.007666ms) from=127.0.0.1:41353
2016/03/17 06:35:08 [DEBUG] http: Request DELETE /v1/kv/_rexec/ba7f7a53-653f-7282-eeea-e3050a5a1594?dc=dc2&recurse= (154.585ms) from=127.0.0.1:41351
2016/03/17 06:35:08 [DEBUG] http: Request PUT /v1/session/destroy/ba7f7a53-653f-7282-eeea-e3050a5a1594?dc=dc2 (149.38ms) from=127.0.0.1:41351
2016/03/17 06:35:08 [DEBUG] http: Shutting down http server (127.0.0.1:10441)
2016/03/17 06:35:08 [DEBUG] http: Shutting down http server (127.0.0.1:10431)
--- PASS: TestExecCommandRun_CrossDC (5.02s)
=== RUN   TestExecCommand_Validate
--- PASS: TestExecCommand_Validate (0.00s)
=== RUN   TestExecCommand_Sessions
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (304µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (335µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (322µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (325.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (321µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (349µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (322µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (317.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (319.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (293µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (315µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (306.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (310.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (328.666µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (325.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (312µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (378µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (334µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (317.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (357µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (308µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (326µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (329.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (301.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (340µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (297.334µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (318µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (327.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (302.666µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (304µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (316µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (308.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (318µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (305.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (304.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (273µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (305.334µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (323.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (291.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (336µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (283.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (300µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (273.666µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (269.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (257µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (284.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (264.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (292.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (266.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (276.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (289µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (317.666µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (314.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (268µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (281.334µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (250.334µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (273.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (297µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (279µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (302.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (309.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (295.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (308.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (272.666µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (348.666µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (373µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (286.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (299µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (285µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (284.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (280.334µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (259.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (270.667µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (372.333µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:34878
2016/03/17 06:35:09 [DEBUG] http: Request GET /v1/catalog/nodes (2.371667ms) from=127.0.0.1:34878
2016/03/17 06:35:10 [DEBUG] http: Request GET /v1/catalog/nodes (304.333µs) from=127.0.0.1:34878
2016/03/17 06:35:10 [DEBUG] http: Request GET /v1/catalog/nodes (293.333µs) from=127.0.0.1:34878
2016/03/17 06:35:10 [DEBUG] http: Request GET /v1/catalog/nodes (350.334µs) from=127.0.0.1:34878
2016/03/17 06:35:10 [DEBUG] http: Request PUT /v1/session/create (191.226ms) from=127.0.0.1:34879
2016/03/17 06:35:10 [DEBUG] http: Request GET /v1/session/info/bd387a52-6682-ea4f-7c77-583be6eb4b2f (761.667µs) from=127.0.0.1:34879
2016/03/17 06:35:10 [DEBUG] http: Request PUT /v1/session/destroy/bd387a52-6682-ea4f-7c77-583be6eb4b2f (187.604333ms) from=127.0.0.1:34879
2016/03/17 06:35:10 [DEBUG] http: Request GET /v1/session/info/bd387a52-6682-ea4f-7c77-583be6eb4b2f (296µs) from=127.0.0.1:34880
2016/03/17 06:35:10 [DEBUG] http: Shutting down http server (127.0.0.1:10451)
--- PASS: TestExecCommand_Sessions (1.97s)
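TestExecCommand_Sessions reduces to the session lifecycle endpoints seen above: PUT /v1/session/create, GET /v1/session/info/<id> while the session exists, PUT /v1/session/destroy/<id>, and a final info read that comes back empty. A small sketch of the same lifecycle with the Go client, for orientation only (the session name is an illustrative assumption):

    package main

    import (
        "log"

        "github.com/hashicorp/consul/api"
    )

    func main() {
        client, err := api.NewClient(api.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }

        // PUT /v1/session/create
        sid, _, err := client.Session().Create(&api.SessionEntry{Name: "demo"}, nil)
        if err != nil {
            log.Fatal(err)
        }

        // GET /v1/session/info/<id> -> non-nil entry while the session is live
        entry, _, _ := client.Session().Info(sid, nil)
        log.Printf("live: %v", entry != nil)

        // PUT /v1/session/destroy/<id>
        if _, err := client.Session().Destroy(sid, nil); err != nil {
            log.Fatal(err)
        }

        // GET /v1/session/info/<id> -> nil entry once destroyed
        entry, _, _ = client.Session().Info(sid, nil)
        log.Printf("live after destroy: %v", entry != nil)
    }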
=== RUN   TestExecCommand_Sessions_Foreign
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (281.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (269.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (281.666µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (292.666µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (278.666µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (319µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (310µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (321.667µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (276.667µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (328µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (272.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (279.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (270.667µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (295.667µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (260µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (274.334µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (268.334µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (280µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (270.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (285µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (275µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (273.667µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (273.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (271.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (265.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (272.334µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (299.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (280µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (273µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (290.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (270µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (291.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (335.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (295µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (269.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (272.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (281.667µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (288µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (354µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (243.667µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (264.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (245µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (273µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (260µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (245.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (278.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (242µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (266.666µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (263.667µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (248.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (335µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (249.667µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (252.666µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (258µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (318.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (246.334µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (256.667µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (240µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (326.334µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (259µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (302.334µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (273.666µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (267µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (273.666µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (262.333µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (269µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (281µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (279.667µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (311.667µs) from=127.0.0.1:37167
2016/03/17 06:35:11 [DEBUG] http: Request GET /v1/catalog/nodes (381.667µs) from=127.0.0.1:37167
2016/03/17 06:35:12 [DEBUG] http: Request GET /v1/catalog/nodes (266.333µs) from=127.0.0.1:37167
2016/03/17 06:35:12 [DEBUG] http: Request GET /v1/catalog/nodes (313.333µs) from=127.0.0.1:37167
2016/03/17 06:35:12 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:37167
2016/03/17 06:35:12 [DEBUG] http: Request GET /v1/catalog/nodes (5.468667ms) from=127.0.0.1:37167
2016/03/17 06:35:12 [DEBUG] http: Request GET /v1/catalog/nodes (454.666µs) from=127.0.0.1:37167
2016/03/17 06:35:12 [DEBUG] http: Request GET /v1/health/service/consul?passing=1 (813.666µs) from=127.0.0.1:37168
2016/03/17 06:35:12 [DEBUG] http: Request PUT /v1/session/create (145.489667ms) from=127.0.0.1:37168
2016/03/17 06:35:12 [DEBUG] http: Request GET /v1/session/info/4cb586bd-5dea-0b3c-0630-dde2c476bbe1 (331.666µs) from=127.0.0.1:37168
2016/03/17 06:35:12 [DEBUG] http: Request PUT /v1/session/destroy/4cb586bd-5dea-0b3c-0630-dde2c476bbe1 (163.041667ms) from=127.0.0.1:37168
2016/03/17 06:35:12 [DEBUG] http: Request GET /v1/session/info/4cb586bd-5dea-0b3c-0630-dde2c476bbe1 (278µs) from=127.0.0.1:37169
2016/03/17 06:35:12 [DEBUG] http: Shutting down http server (127.0.0.1:10461)
--- PASS: TestExecCommand_Sessions_Foreign (2.00s)
=== RUN   TestExecCommand_UploadDestroy
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (388µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (361.334µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (323.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (333.666µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (347µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (362.666µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (317µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (299.666µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (315µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (299µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (304.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (357.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (367µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (345.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (329.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (308µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (314.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (368µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (315.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (325.666µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (322.666µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (339µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (306.334µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (315µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (370µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (329.667µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (298µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (315µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (318.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (380.334µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (327.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (322.667µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (312.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (292µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (309.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (398µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (359.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (292µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (285.666µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (276.666µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (309µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (396.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (293.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (256µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (304.334µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (367.666µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (328.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (305.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (286.334µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (285.667µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (259.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (257.666µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (265µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (290.667µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (305.334µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (302µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (273.667µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (296.667µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (307.666µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (314.666µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (380.667µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (305.334µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (267.334µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (300.667µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (276µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (261.667µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (288.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (271.667µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (341.333µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (263.334µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (266.667µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (263µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (263µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (273.666µs) from=127.0.0.1:38719
2016/03/17 06:35:13 [DEBUG] http: Request GET /v1/catalog/nodes (283.666µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (277.333µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (293µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (278.667µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (317µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (257.333µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (254.666µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (254µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (267.667µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (256.667µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (256.667µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (246.667µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (271.667µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/catalog/nodes (329.333µs) from=127.0.0.1:38719
2016/03/17 06:35:14 [DEBUG] http: Request PUT /v1/session/create (204.963ms) from=127.0.0.1:38720
2016/03/17 06:35:14 [DEBUG] http: Request PUT /v1/kv/_rexec/c8631de4-232b-2058-9fd1-2a2e48c6865f/job?acquire=c8631de4-232b-2058-9fd1-2a2e48c6865f (247.862667ms) from=127.0.0.1:38720
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/kv/_rexec/c8631de4-232b-2058-9fd1-2a2e48c6865f/job (417.333µs) from=127.0.0.1:38720
2016/03/17 06:35:14 [DEBUG] http: Request DELETE /v1/kv/_rexec/c8631de4-232b-2058-9fd1-2a2e48c6865f?recurse= (246.332333ms) from=127.0.0.1:38720
2016/03/17 06:35:14 [DEBUG] http: Request GET /v1/kv/_rexec/c8631de4-232b-2058-9fd1-2a2e48c6865f/job (271µs) from=127.0.0.1:38720
2016/03/17 06:35:14 [DEBUG] http: Shutting down http server (127.0.0.1:10471)
--- PASS: TestExecCommand_UploadDestroy (2.47s)
=== RUN   TestExecCommand_StreamResults
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (411.666µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (419.667µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (337.333µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (321.667µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (354.333µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (359.667µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (301.667µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (282µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (510.333µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (373.334µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (348.333µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (293µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (308.667µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (310µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (322.333µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (279µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (294µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (282.667µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (296.333µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (317µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (320.666µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (312.667µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (314.333µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (338µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (315.666µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (326µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (298µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (333µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (303µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (315.667µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (409.667µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (322.333µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (354.666µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (295µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (328.667µs) from=127.0.0.1:59259
2016/03/17 06:35:15 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:15 [DEBUG] http: Request GET /v1/catalog/nodes (336.333µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (334.667µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (296.334µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (295µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (278.667µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (332.667µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (296.667µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (298µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (273.334µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (275.334µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (289µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (264.333µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (383.334µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (239.333µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (281µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (243µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (373µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (311µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (315.667µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (275µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (287.333µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (285µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (290.666µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (455.333µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (291.333µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (282.666µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (337µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (305µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (362µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (257.334µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (318µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (284µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (351.333µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (263.667µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (272.333µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (275.334µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (251.334µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (290µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (292.334µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (365.333µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (339.334µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (364.333µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (296µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (348.667µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (285µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/catalog/nodes (360.333µs) from=127.0.0.1:59259
2016/03/17 06:35:16 [DEBUG] http: Request PUT /v1/session/create (175.660334ms) from=127.0.0.1:59260
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/?keys= (552.667µs) from=127.0.0.1:59261
2016/03/17 06:35:16 [DEBUG] http: Request PUT /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/foo/ack?acquire=ef0373b6-d628-304f-9fbc-3f5738b3ae04 (172.493667ms) from=127.0.0.1:59260
2016/03/17 06:35:16 [DEBUG] http: Request GET /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/?index=1&keys= (168.415333ms) from=127.0.0.1:59261
2016/03/17 06:35:17 [DEBUG] http: Request PUT /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/foo/exit?acquire=ef0373b6-d628-304f-9fbc-3f5738b3ae04 (230.444ms) from=127.0.0.1:59260
2016/03/17 06:35:17 [DEBUG] http: Request GET /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/?index=5&keys= (231.239334ms) from=127.0.0.1:59261
2016/03/17 06:35:17 [DEBUG] http: Request GET /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/foo/exit (397.333µs) from=127.0.0.1:59261
2016/03/17 06:35:17 [DEBUG] http: Request PUT /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/foo/random?acquire=ef0373b6-d628-304f-9fbc-3f5738b3ae04 (236.017ms) from=127.0.0.1:59260
2016/03/17 06:35:17 [DEBUG] http: Request GET /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/?index=6&keys= (237.376667ms) from=127.0.0.1:59261
2016/03/17 06:35:17 [DEBUG] http: Request PUT /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/foo/out/00000?acquire=ef0373b6-d628-304f-9fbc-3f5738b3ae04 (164.679ms) from=127.0.0.1:59260
2016/03/17 06:35:17 [DEBUG] http: Request GET /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/?index=7&keys= (165.526666ms) from=127.0.0.1:59261
2016/03/17 06:35:17 [DEBUG] http: Request GET /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/foo/out/00000 (527.666µs) from=127.0.0.1:59261
2016/03/17 06:35:17 [DEBUG] http: Request PUT /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/foo/out/00001?acquire=ef0373b6-d628-304f-9fbc-3f5738b3ae04 (170.174ms) from=127.0.0.1:59260
2016/03/17 06:35:17 [DEBUG] http: Request GET /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/?index=8&keys= (168.196334ms) from=127.0.0.1:59261
2016/03/17 06:35:17 [DEBUG] http: Request GET /v1/kv/_rexec/ef0373b6-d628-304f-9fbc-3f5738b3ae04/foo/out/00001 (384.667µs) from=127.0.0.1:59261
2016/03/17 06:35:17 [DEBUG] http: Shutting down http server (127.0.0.1:10481)
--- PASS: TestExecCommand_StreamResults (2.87s)
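The StreamResults trace shows the blocking-query pattern used to stream results back: each GET on _rexec/<session>/?keys= passes the index returned by the previous response plus a short wait, so the request parks until a new ack, out, or exit key lands under the prefix. A sketch of that long-poll loop with the Go client follows; the fixed round cap stands in for the command's real termination logic and is an assumption of this sketch.

    package main

    import (
        "log"
        "time"

        "github.com/hashicorp/consul/api"
    )

    // streamKeys follows a KV prefix with blocking queries, waking whenever a
    // new key is written (the ?index=N&keys= requests in the log above).
    func streamKeys(kv *api.KV, prefix string, rounds int) {
        var lastIndex uint64
        for i := 0; i < rounds; i++ {
            opts := &api.QueryOptions{WaitIndex: lastIndex, WaitTime: 400 * time.Millisecond}
            keys, meta, err := kv.Keys(prefix, "", opts)
            if err != nil {
                log.Fatal(err)
            }
            if meta.LastIndex != lastIndex {
                log.Printf("keys now: %v", keys)
                lastIndex = meta.LastIndex
            }
        }
    }

    func main() {
        client, err := api.NewClient(api.DefaultConfig())
        if err != nil {
            log.Fatal(err)
        }
        sid, _, err := client.Session().Create(nil, nil)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Session().Destroy(sid, nil)

        streamKeys(client.KV(), "_rexec/"+sid+"/", 5)
    }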
=== RUN   TestForceLeaveCommand_implements
--- PASS: TestForceLeaveCommand_implements (0.00s)
=== RUN   TestForceLeaveCommandRun
2016/03/17 06:35:19 [ERR] http: Request PUT /v1/session/renew/c8631de4-232b-2058-9fd1-2a2e48c6865f, error: No cluster leader from=127.0.0.1:38720
2016/03/17 06:35:19 [DEBUG] http: Request PUT /v1/session/renew/c8631de4-232b-2058-9fd1-2a2e48c6865f (235µs) from=127.0.0.1:38720
2016/03/17 06:35:19 [DEBUG] http: Shutting down http server (127.0.0.1:10501)
2016/03/17 06:35:19 [INFO] agent.rpc: Accepted client: 127.0.0.1:58212
2016/03/17 06:35:19 [DEBUG] http: Shutting down http server (127.0.0.1:10501)
2016/03/17 06:35:20 [DEBUG] http: Shutting down http server (127.0.0.1:10491)
--- PASS: TestForceLeaveCommandRun (2.14s)
=== RUN   TestForceLeaveCommandRun_noAddrs
--- PASS: TestForceLeaveCommandRun_noAddrs (0.00s)
=== RUN   TestInfoCommand_implements
--- PASS: TestInfoCommand_implements (0.00s)
=== RUN   TestInfoCommandRun
2016/03/17 06:35:20 [INFO] agent.rpc: Accepted client: 127.0.0.1:52705
2016/03/17 06:35:21 [DEBUG] http: Shutting down http server (127.0.0.1:10511)
--- PASS: TestInfoCommandRun (1.11s)
=== RUN   TestJoinCommand_implements
--- PASS: TestJoinCommand_implements (0.00s)
=== RUN   TestJoinCommandRun
2016/03/17 06:35:21 [ERR] http: Request PUT /v1/session/renew/ef0373b6-d628-304f-9fbc-3f5738b3ae04, error: No cluster leader from=127.0.0.1:59260
2016/03/17 06:35:21 [DEBUG] http: Request PUT /v1/session/renew/ef0373b6-d628-304f-9fbc-3f5738b3ae04 (260.334µs) from=127.0.0.1:59260
2016/03/17 06:35:22 [INFO] agent.rpc: Accepted client: 127.0.0.1:41786
2016/03/17 06:35:23 [DEBUG] http: Shutting down http server (127.0.0.1:10531)
2016/03/17 06:35:23 [DEBUG] http: Shutting down http server (127.0.0.1:10521)
--- PASS: TestJoinCommandRun (2.36s)
=== RUN   TestJoinCommandRun_wan
2016/03/17 06:35:24 [INFO] agent.rpc: Accepted client: 127.0.0.1:60171
2016/03/17 06:35:25 [DEBUG] http: Shutting down http server (127.0.0.1:10551)
2016/03/17 06:35:25 [DEBUG] http: Shutting down http server (127.0.0.1:10541)
--- PASS: TestJoinCommandRun_wan (2.07s)
=== RUN   TestJoinCommandRun_noAddrs
--- PASS: TestJoinCommandRun_noAddrs (0.00s)
=== RUN   TestKeygenCommand_implements
--- PASS: TestKeygenCommand_implements (0.00s)
=== RUN   TestKeygenCommand
--- PASS: TestKeygenCommand (0.00s)
=== RUN   TestKeyringCommand_implements
--- PASS: TestKeyringCommand_implements (0.00s)
=== RUN   TestKeyringCommandRun
2016/03/17 06:35:26 [INFO] agent.rpc: Accepted client: 127.0.0.1:40255
2016/03/17 06:35:26 [INFO] agent.rpc: Accepted client: 127.0.0.1:40257
2016/03/17 06:35:26 [INFO] agent.rpc: Accepted client: 127.0.0.1:40258
2016/03/17 06:35:26 [INFO] agent.rpc: Accepted client: 127.0.0.1:40259
2016/03/17 06:35:26 [INFO] agent.rpc: Accepted client: 127.0.0.1:40260
2016/03/17 06:35:26 [INFO] agent.rpc: Accepted client: 127.0.0.1:40261
2016/03/17 06:35:27 [DEBUG] http: Shutting down http server (127.0.0.1:10561)
--- PASS: TestKeyringCommandRun (2.11s)
=== RUN   TestKeyringCommandRun_help
--- PASS: TestKeyringCommandRun_help (0.00s)
=== RUN   TestKeyringCommandRun_failedConnection
--- PASS: TestKeyringCommandRun_failedConnection (0.00s)
=== RUN   TestLeaveCommand_implements
--- PASS: TestLeaveCommand_implements (0.00s)
=== RUN   TestLeaveCommandRun
2016/03/17 06:35:28 [INFO] agent.rpc: Accepted client: 127.0.0.1:39842
2016/03/17 06:35:28 [INFO] agent.rpc: Graceful leave triggered
2016/03/17 06:35:28 [DEBUG] http: Shutting down http server (127.0.0.1:10571)
--- PASS: TestLeaveCommandRun (1.07s)
=== RUN   TestLockCommand_implements
--- PASS: TestLockCommand_implements (0.00s)
=== RUN   TestLockCommand_BadArgs
--- PASS: TestLockCommand_BadArgs (0.00s)
=== RUN   TestLockCommand_Run
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (6.284667ms) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (299.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (319.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (348.666µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (306.666µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (309µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (311.333µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (310µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (309µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (385.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (310.333µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (267.334µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (292µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (271.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (270.333µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (278.333µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (445.334µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (276.666µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (284µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (273.666µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (306.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (281.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (267.334µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (268.333µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (301.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (268.333µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (329.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (367µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (268.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (280µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (268µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (348µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (270µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (264.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (317.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (370.333µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (300.333µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (283.666µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (269µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (350.334µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (422µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (303.334µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (403µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (275.333µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (275µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (363.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (403.334µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (285µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (298.334µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (414.667µs) from=127.0.0.1:43939
2016/03/17 06:35:29 [DEBUG] http: Request GET /v1/catalog/nodes (406.667µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (290.667µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (303.666µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (287.333µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (397.666µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (353.333µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (351.334µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (411µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (308.334µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (335.334µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (338.667µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (326.333µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (292.333µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (301µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (389µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (337.334µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/catalog/nodes (340.334µs) from=127.0.0.1:43939
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/agent/self (938.666µs) from=127.0.0.1:43940
2016/03/17 06:35:30 [DEBUG] http: Request PUT /v1/session/create (151.635ms) from=127.0.0.1:43940
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?wait=15000ms (347.333µs) from=127.0.0.1:43940
2016/03/17 06:35:30 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?acquire=5f5d7f7b-fdf1-cc6c-57d0-ef9f723898db&flags=3304740253564472344 (204.166667ms) from=127.0.0.1:43940
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent= (1.055667ms) from=127.0.0.1:43940
2016/03/17 06:35:30 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?flags=3304740253564472344&release=5f5d7f7b-fdf1-cc6c-57d0-ef9f723898db (191.627ms) from=127.0.0.1:43941
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent=&index=5 (217.479334ms) from=127.0.0.1:43940
2016/03/17 06:35:30 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (406.333µs) from=127.0.0.1:43941
2016/03/17 06:35:30 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=6 (138.849666ms) from=127.0.0.1:43941
2016/03/17 06:35:31 [DEBUG] http: Request PUT /v1/session/destroy/5f5d7f7b-fdf1-cc6c-57d0-ef9f723898db (280.268333ms) from=127.0.0.1:43942
2016/03/17 06:35:31 [DEBUG] http: Shutting down http server (127.0.0.1:10581)
--- PASS: TestLockCommand_Run (2.40s)
=== RUN   TestLockCommand_Try_Lock
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (353.667µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (377.334µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (458.334µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (491µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (403.333µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (465.667µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (322.667µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (334.667µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (307µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (340.666µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (308.667µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (382.667µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (317µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (384.666µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (304.333µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (307µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (344.667µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (301.333µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (331.333µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (319µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (324µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (364.666µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (304µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (296µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (317.667µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (296µs) from=127.0.0.1:51973
2016/03/17 06:35:31 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:31 [DEBUG] http: Request GET /v1/catalog/nodes (297.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (377.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (327µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (405µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (307.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (298.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (316.334µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (318µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (314.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (334µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (328.334µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (305.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (305.334µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (302.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (320µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (277.334µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (286.334µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (279.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (265.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (271.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (256.666µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (345.666µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (254µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (269.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (244.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (388.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (413.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (302.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (262.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (249.334µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (266.334µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (238µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (239.666µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (251µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (250µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (270.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (254.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (267µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (251.334µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (268µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (292.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (295.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (249.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (285µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (244µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (275µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (359.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (253.666µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (258.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (256µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (248.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (419µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (289µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (322.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (252.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (262.333µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (267.334µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (262µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (273.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/catalog/nodes (298.667µs) from=127.0.0.1:51973
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/agent/self (748µs) from=127.0.0.1:51974
2016/03/17 06:35:32 [DEBUG] http: Request PUT /v1/session/create (190.907667ms) from=127.0.0.1:51974
2016/03/17 06:35:32 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?wait=10000ms (325µs) from=127.0.0.1:51974
2016/03/17 06:35:33 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?acquire=7febe200-4378-db68-76e5-9ae298919b43&flags=3304740253564472344 (270.616ms) from=127.0.0.1:51974
2016/03/17 06:35:33 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent= (1.312334ms) from=127.0.0.1:51974
2016/03/17 06:35:33 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?flags=3304740253564472344&release=7febe200-4378-db68-76e5-9ae298919b43 (271.342666ms) from=127.0.0.1:51975
2016/03/17 06:35:33 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent=&index=5 (300.637ms) from=127.0.0.1:51974
2016/03/17 06:35:33 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (419.333µs) from=127.0.0.1:51974
2016/03/17 06:35:33 [DEBUG] http: Request PUT /v1/session/destroy/7febe200-4378-db68-76e5-9ae298919b43 (331.393ms) from=127.0.0.1:51975
2016/03/17 06:35:33 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=6 (329.669666ms) from=127.0.0.1:51974
2016/03/17 06:35:33 [DEBUG] http: Shutting down http server (127.0.0.1:10591)
--- PASS: TestLockCommand_Try_Lock (2.85s)
=== RUN   TestLockCommand_Try_Semaphore
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (292.333µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (362.333µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (307µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (349µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (351.667µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (305µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (344µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (290.667µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (284µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (286.667µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (288.333µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (292µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (285.334µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (309µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (285.333µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (293.333µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (301.333µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (304µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (282.667µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (337.667µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (274.333µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (298µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (312µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (355µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (298.333µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (408µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (330µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (372.333µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (318.667µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (319.666µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (337.667µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (303µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (334µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (387.334µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (388.333µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (297.334µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (334.333µs) from=127.0.0.1:46593
2016/03/17 06:35:34 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:34 [DEBUG] http: Request GET /v1/catalog/nodes (404µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (326.666µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (338.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (303µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (309µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (369.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (294µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (312.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (296.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (362.333µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (265µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (294.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (281.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (288.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (267.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (290µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (388µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (293µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (258.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (264.333µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (321µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (364µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (339µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (331.334µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (320.333µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (286µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (274.666µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (324.666µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (321.334µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (271.666µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (294.333µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (337.333µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (298µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (276.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (339.333µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (293µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (274.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (345.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (350.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (332µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (269.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (298.667µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/catalog/nodes (342.333µs) from=127.0.0.1:46593
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/agent/self (755.333µs) from=127.0.0.1:46594
2016/03/17 06:35:35 [DEBUG] http: Request PUT /v1/session/create (154.262ms) from=127.0.0.1:46594
2016/03/17 06:35:35 [DEBUG] http: Request PUT /v1/kv/test/prefix/57276c44-112e-5b42-0edb-491ddb7ff132?acquire=57276c44-112e-5b42-0edb-491ddb7ff132&flags=16210313421097356768 (221.929ms) from=127.0.0.1:46594
2016/03/17 06:35:35 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse=&wait=10000ms (470µs) from=127.0.0.1:46594
2016/03/17 06:35:36 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=0&flags=16210313421097356768 (229.419333ms) from=127.0.0.1:46594
2016/03/17 06:35:36 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&recurse= (1.667667ms) from=127.0.0.1:46594
2016/03/17 06:35:36 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (402.666µs) from=127.0.0.1:46595
2016/03/17 06:35:36 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=6&flags=16210313421097356768 (163.208333ms) from=127.0.0.1:46595
2016/03/17 06:35:36 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&index=6&recurse= (189.201334ms) from=127.0.0.1:46594
2016/03/17 06:35:36 [DEBUG] http: Request DELETE /v1/kv/test/prefix/57276c44-112e-5b42-0edb-491ddb7ff132 (156.706334ms) from=127.0.0.1:46594
2016/03/17 06:35:36 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse= (422.334µs) from=127.0.0.1:46594
2016/03/17 06:35:36 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=7 (310.520666ms) from=127.0.0.1:46594
2016/03/17 06:35:36 [DEBUG] http: Request PUT /v1/session/destroy/57276c44-112e-5b42-0edb-491ddb7ff132 (315.531667ms) from=127.0.0.1:46595
2016/03/17 06:35:37 [DEBUG] http: Shutting down http server (127.0.0.1:10601)
--- PASS: TestLockCommand_Try_Semaphore (3.03s)
=== RUN   TestLockCommand_MonitorRetry_Lock_Default
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (281µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (298.667µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (276.333µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (294.333µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (282µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (261µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (274µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (270.666µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (276µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (271.333µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (264.666µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (276.333µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (284.333µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (325.334µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (289.333µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (307.334µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (280.667µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (378.667µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (349.667µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (320µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (368.667µs) from=127.0.0.1:40792
2016/03/17 06:35:37 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:37 [DEBUG] http: Request GET /v1/catalog/nodes (305.334µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (287.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (300.667µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (290.667µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (344µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (323.334µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (372µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (290.667µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (307.334µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (313.334µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (332.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (307.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (353.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (306.334µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (432.334µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (318µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (304.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (361.334µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (320.667µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (342.666µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (301µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (303.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (303µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (311.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (320.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (380.667µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (306.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (310µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (278.667µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (253.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (283.667µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (280.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (273µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (280.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (313µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (329µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (312µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (304µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (258.666µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (312.667µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (283.666µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (388.667µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (311µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (325.667µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (304.666µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (288.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (294µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (273µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (366µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (325µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (302µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (415µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (459µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (329µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (343µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (346.334µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (410.667µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (350.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (331µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (295µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (416.333µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (293.334µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (262.334µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (280.667µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/catalog/nodes (457µs) from=127.0.0.1:40792
2016/03/17 06:35:38 [DEBUG] http: Request GET /v1/agent/self (828.334µs) from=127.0.0.1:40793
2016/03/17 06:35:39 [DEBUG] http: Request PUT /v1/session/create (178.807333ms) from=127.0.0.1:40793
2016/03/17 06:35:39 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?wait=15000ms (336µs) from=127.0.0.1:40793
2016/03/17 06:35:39 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?acquire=163b63c5-74da-3bb3-a6e0-d70191c77136&flags=3304740253564472344 (218.291667ms) from=127.0.0.1:40793
2016/03/17 06:35:39 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent= (1.703334ms) from=127.0.0.1:40793
2016/03/17 06:35:39 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?flags=3304740253564472344&release=163b63c5-74da-3bb3-a6e0-d70191c77136 (217.945334ms) from=127.0.0.1:40794
2016/03/17 06:35:39 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent=&index=5 (238.185666ms) from=127.0.0.1:40793
2016/03/17 06:35:39 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (358.667µs) from=127.0.0.1:40794
2016/03/17 06:35:39 [DEBUG] http: Request PUT /v1/session/destroy/163b63c5-74da-3bb3-a6e0-d70191c77136 (306.454333ms) from=127.0.0.1:40793
2016/03/17 06:35:39 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=6 (306.533ms) from=127.0.0.1:40794
2016/03/17 06:35:39 [DEBUG] http: Shutting down http server (127.0.0.1:10611)
--- PASS: TestLockCommand_MonitorRetry_Lock_Default (2.92s)
=== RUN   TestLockCommand_MonitorRetry_Semaphore_Default
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (294.333µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (303.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (471.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (421.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (552µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (316.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (300.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (303µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (395.334µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (392.333µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (306.333µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (393.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (479.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (316µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (359.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (414.333µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (284µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (411µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (432.333µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (291.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (311µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (305.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (435.333µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (270.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (364.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (294µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (432µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (419.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (287.334µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (387µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (308.333µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (337µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (326.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (298µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (279.666µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (325.334µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (295µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (315.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (406µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (304.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (293µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (389.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (282.667µs) from=127.0.0.1:51951
2016/03/17 06:35:40 [DEBUG] http: Request GET /v1/catalog/nodes (305.333µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (400.333µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (253.666µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (439.333µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (445.333µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (293.333µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (306.334µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (301.666µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (366.667µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (345.333µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (343.667µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (414.667µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (304µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (328µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (316.667µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (344.667µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (344µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (387.667µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (547µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (327.666µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (465µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (332µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (265.667µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (318.667µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (457.667µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (301.667µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (509µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (262.666µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (529.667µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (336.333µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (403.333µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (394.333µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (243.667µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/catalog/nodes (448µs) from=127.0.0.1:51951
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/agent/self (749.333µs) from=127.0.0.1:51952
2016/03/17 06:35:41 [DEBUG] http: Request PUT /v1/session/create (168.606ms) from=127.0.0.1:51952
2016/03/17 06:35:41 [DEBUG] http: Request PUT /v1/kv/test/prefix/4a8d6955-7a09-48fd-dc6d-aa2357a243c4?acquire=4a8d6955-7a09-48fd-dc6d-aa2357a243c4&flags=16210313421097356768 (237.368333ms) from=127.0.0.1:51952
2016/03/17 06:35:41 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse=&wait=15000ms (624.667µs) from=127.0.0.1:51952
2016/03/17 06:35:42 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=0&flags=16210313421097356768 (212.254ms) from=127.0.0.1:51952
2016/03/17 06:35:42 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&recurse= (1.393333ms) from=127.0.0.1:51952
2016/03/17 06:35:42 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (517µs) from=127.0.0.1:51953
2016/03/17 06:35:42 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=6&flags=16210313421097356768 (149.056666ms) from=127.0.0.1:51953
2016/03/17 06:35:42 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&index=6&recurse= (166.977ms) from=127.0.0.1:51952
2016/03/17 06:35:42 [DEBUG] http: Request DELETE /v1/kv/test/prefix/4a8d6955-7a09-48fd-dc6d-aa2357a243c4 (156.198ms) from=127.0.0.1:51953
2016/03/17 06:35:42 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse= (515µs) from=127.0.0.1:51953
2016/03/17 06:35:42 [DEBUG] http: Request PUT /v1/session/destroy/4a8d6955-7a09-48fd-dc6d-aa2357a243c4 (157.070333ms) from=127.0.0.1:51952
2016/03/17 06:35:42 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=7 (304.593333ms) from=127.0.0.1:51953
2016/03/17 06:35:42 [DEBUG] http: Shutting down http server (127.0.0.1:10621)
--- PASS: TestLockCommand_MonitorRetry_Semaphore_Default (2.88s)
=== RUN   TestLockCommand_MonitorRetry_Lock_Arg
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (284.666µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (314.666µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (334.333µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (314µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (371.333µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (299.333µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (313µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (349.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (294.334µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (389µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (310.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (297.666µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (288.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (342.333µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (424.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (302.333µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (326µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (336.666µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (341.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (357µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (294µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (293µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (362µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (439µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (293.666µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (306.333µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (445µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (357µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (271.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (292.333µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (269.666µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (297.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (266µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (267.333µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (372.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (269µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (263µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (287.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (240.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (241.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (243µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (323µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (310µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (241.333µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (243.334µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (310.334µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (244µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (301.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (307.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (375.333µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (343.666µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (290µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (267.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (394µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (335.667µs) from=127.0.0.1:38609
2016/03/17 06:35:43 [DEBUG] http: Request GET /v1/catalog/nodes (262µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (392.333µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (265.666µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (323.334µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (346.667µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (238.666µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (342.334µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (253.667µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (343.667µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (389µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (303.667µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (396.667µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (304µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (261.333µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (384.333µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (371.334µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (394.334µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/catalog/nodes (267.333µs) from=127.0.0.1:38609
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/agent/self (727.333µs) from=127.0.0.1:38610
2016/03/17 06:35:44 [DEBUG] http: Request PUT /v1/session/create (132.336667ms) from=127.0.0.1:38610
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?wait=15000ms (372µs) from=127.0.0.1:38610
2016/03/17 06:35:44 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?acquire=ab7eabea-1d12-3660-4533-cc6bd8537f96&flags=3304740253564472344 (212.19ms) from=127.0.0.1:38610
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent= (1.289ms) from=127.0.0.1:38610
2016/03/17 06:35:44 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?flags=3304740253564472344&release=ab7eabea-1d12-3660-4533-cc6bd8537f96 (238.843333ms) from=127.0.0.1:38611
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent=&index=5 (256.211667ms) from=127.0.0.1:38610
2016/03/17 06:35:44 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (374.667µs) from=127.0.0.1:38611
2016/03/17 06:35:45 [DEBUG] http: Request PUT /v1/session/destroy/ab7eabea-1d12-3660-4533-cc6bd8537f96 (164.476667ms) from=127.0.0.1:38610
2016/03/17 06:35:45 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=6 (370.134333ms) from=127.0.0.1:38611
2016/03/17 06:35:45 [DEBUG] http: Shutting down http server (127.0.0.1:10631)
--- PASS: TestLockCommand_MonitorRetry_Lock_Arg (2.54s)
=== RUN   TestLockCommand_MonitorRetry_Semaphore_Arg
2016/03/17 06:35:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:45 [DEBUG] http: Request GET /v1/catalog/nodes (410.667µs) from=127.0.0.1:57415
2016/03/17 06:35:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:45 [DEBUG] http: Request GET /v1/catalog/nodes (387µs) from=127.0.0.1:57415
2016/03/17 06:35:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:45 [DEBUG] http: Request GET /v1/catalog/nodes (410.666µs) from=127.0.0.1:57415
2016/03/17 06:35:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:45 [DEBUG] http: Request GET /v1/catalog/nodes (290.667µs) from=127.0.0.1:57415
2016/03/17 06:35:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:45 [DEBUG] http: Request GET /v1/catalog/nodes (426µs) from=127.0.0.1:57415
2016/03/17 06:35:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:45 [DEBUG] http: Request GET /v1/catalog/nodes (419µs) from=127.0.0.1:57415
2016/03/17 06:35:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:45 [DEBUG] http: Request GET /v1/catalog/nodes (457.333µs) from=127.0.0.1:57415
2016/03/17 06:35:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:45 [DEBUG] http: Request GET /v1/catalog/nodes (332µs) from=127.0.0.1:57415
2016/03/17 06:35:45 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:45 [DEBUG] http: Request GET /v1/catalog/nodes (352.666µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (304.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (278.666µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (294.666µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (293µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (300µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (292.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (288µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (421µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (294.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (312.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (340.666µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (299.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (331µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (315.334µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (271µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (318.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (412.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (295.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (269.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (272.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (330.334µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (272.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (367µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (319µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (349µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (285µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (401.666µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (263µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (291.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (264µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (292.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (316.334µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (269.334µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (308.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (303.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (262.334µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (386.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (314µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (413.334µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (264.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (376µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (357.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (276.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (365µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (329.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (276.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (259µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (265.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (453µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (276µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (419µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (242.666µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (256.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (329µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (251.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (358.334µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (278.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (271.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (393µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (259.334µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (288.334µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (256µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (399.334µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (250.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (311.667µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (267.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (282.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (277.666µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (281µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (282.333µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (372.334µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/catalog/nodes (360µs) from=127.0.0.1:57415
2016/03/17 06:35:46 [DEBUG] http: Request GET /v1/agent/self (801.667µs) from=127.0.0.1:57416
2016/03/17 06:35:47 [DEBUG] http: Request PUT /v1/session/create (173.737667ms) from=127.0.0.1:57416
2016/03/17 06:35:47 [DEBUG] http: Request PUT /v1/kv/test/prefix/67acae52-b6c9-fb3f-a5e3-a8d80de9ba43?acquire=67acae52-b6c9-fb3f-a5e3-a8d80de9ba43&flags=16210313421097356768 (339.354ms) from=127.0.0.1:57416
2016/03/17 06:35:47 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse=&wait=15000ms (589µs) from=127.0.0.1:57416
2016/03/17 06:35:47 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=0&flags=16210313421097356768 (405.151ms) from=127.0.0.1:57416
2016/03/17 06:35:47 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&recurse= (1.638667ms) from=127.0.0.1:57416
2016/03/17 06:35:47 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (366.667µs) from=127.0.0.1:57417
2016/03/17 06:35:48 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=6&flags=16210313421097356768 (301.606ms) from=127.0.0.1:57417
2016/03/17 06:35:48 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&index=6&recurse= (314.997333ms) from=127.0.0.1:57416
2016/03/17 06:35:48 [DEBUG] http: Request DELETE /v1/kv/test/prefix/67acae52-b6c9-fb3f-a5e3-a8d80de9ba43 (314.825667ms) from=127.0.0.1:57417
2016/03/17 06:35:48 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse= (620.333µs) from=127.0.0.1:57417
2016/03/17 06:35:48 [DEBUG] http: Request PUT /v1/session/destroy/67acae52-b6c9-fb3f-a5e3-a8d80de9ba43 (272.464667ms) from=127.0.0.1:57416
2016/03/17 06:35:49 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=7 (610.779ms) from=127.0.0.1:57417
2016/03/17 06:35:49 [DEBUG] http: Shutting down http server (127.0.0.1:10641)
--- PASS: TestLockCommand_MonitorRetry_Semaphore_Arg (4.05s)
=== RUN   TestMaintCommand_implements
--- PASS: TestMaintCommand_implements (0.00s)
=== RUN   TestMaintCommandRun_ConflictingArgs
--- PASS: TestMaintCommandRun_ConflictingArgs (0.00s)
=== RUN   TestMaintCommandRun_NoArgs
2016/03/17 06:35:50 [DEBUG] http: Request GET /v1/agent/self (671µs) from=127.0.0.1:53811
2016/03/17 06:35:50 [DEBUG] http: Request GET /v1/agent/checks (338.334µs) from=127.0.0.1:53811
2016/03/17 06:35:50 [DEBUG] http: Shutting down http server (127.0.0.1:10651)
--- PASS: TestMaintCommandRun_NoArgs (1.55s)
=== RUN   TestMaintCommandRun_EnableNodeMaintenance
2016/03/17 06:35:51 [DEBUG] http: Request GET /v1/agent/self (767.333µs) from=127.0.0.1:47080
2016/03/17 06:35:51 [ERR] agent: failed to sync changes: No cluster leader
2016/03/17 06:35:51 [DEBUG] http: Request PUT /v1/agent/maintenance?enable=true&reason=broken (1.45ms) from=127.0.0.1:47080
2016/03/17 06:35:52 [DEBUG] http: Shutting down http server (127.0.0.1:10661)
--- PASS: TestMaintCommandRun_EnableNodeMaintenance (1.73s)
=== RUN   TestMaintCommandRun_DisableNodeMaintenance
2016/03/17 06:35:53 [DEBUG] http: Request GET /v1/agent/self (803.334µs) from=127.0.0.1:33143
2016/03/17 06:35:53 [ERR] agent: failed to sync changes: No cluster leader
2016/03/17 06:35:53 [DEBUG] http: Request PUT /v1/agent/maintenance?enable=false (218µs) from=127.0.0.1:33143
2016/03/17 06:35:54 [DEBUG] http: Shutting down http server (127.0.0.1:10671)
--- PASS: TestMaintCommandRun_DisableNodeMaintenance (1.54s)
=== RUN   TestMaintCommandRun_EnableServiceMaintenance
2016/03/17 06:35:55 [DEBUG] http: Request GET /v1/agent/self (642.333µs) from=127.0.0.1:54615
2016/03/17 06:35:55 [ERR] agent: failed to sync changes: No cluster leader
2016/03/17 06:35:55 [DEBUG] http: Request PUT /v1/agent/service/maintenance/test?enable=true&reason=broken (1.272ms) from=127.0.0.1:54615
2016/03/17 06:35:56 [DEBUG] http: Shutting down http server (127.0.0.1:10681)
--- PASS: TestMaintCommandRun_EnableServiceMaintenance (1.89s)
=== RUN   TestMaintCommandRun_DisableServiceMaintenance
2016/03/17 06:35:57 [DEBUG] http: Request GET /v1/agent/self (633.333µs) from=127.0.0.1:49968
2016/03/17 06:35:57 [ERR] agent: failed to sync changes: No cluster leader
2016/03/17 06:35:57 [DEBUG] http: Request PUT /v1/agent/service/maintenance/test?enable=false (409µs) from=127.0.0.1:49968
2016/03/17 06:35:58 [DEBUG] http: Shutting down http server (127.0.0.1:10691)
--- PASS: TestMaintCommandRun_DisableServiceMaintenance (2.60s)
=== RUN   TestMaintCommandRun_ServiceMaintenance_NoService
2016/03/17 06:35:59 [DEBUG] http: Request GET /v1/agent/self (743.667µs) from=127.0.0.1:58192
2016/03/17 06:35:59 [DEBUG] http: Request PUT /v1/agent/service/maintenance/redis?enable=true&reason=broken (195µs) from=127.0.0.1:58192
2016/03/17 06:36:00 [DEBUG] http: Shutting down http server (127.0.0.1:10701)
--- PASS: TestMaintCommandRun_ServiceMaintenance_NoService (1.40s)
=== RUN   TestMembersCommand_implements
--- PASS: TestMembersCommand_implements (0.00s)
=== RUN   TestMembersCommandRun
2016/03/17 06:36:00 [INFO] agent.rpc: Accepted client: 127.0.0.1:49197
2016/03/17 06:36:01 [DEBUG] http: Shutting down http server (127.0.0.1:10711)
--- PASS: TestMembersCommandRun (1.43s)
=== RUN   TestMembersCommandRun_WAN
2016/03/17 06:36:02 [INFO] agent.rpc: Accepted client: 127.0.0.1:46001
2016/03/17 06:36:03 [DEBUG] http: Shutting down http server (127.0.0.1:10721)
--- PASS: TestMembersCommandRun_WAN (1.57s)
=== RUN   TestMembersCommandRun_statusFilter
2016/03/17 06:36:03 [INFO] agent.rpc: Accepted client: 127.0.0.1:37942
2016/03/17 06:36:04 [DEBUG] http: Shutting down http server (127.0.0.1:10731)
--- PASS: TestMembersCommandRun_statusFilter (1.39s)
=== RUN   TestMembersCommandRun_statusFilter_failed
2016/03/17 06:36:05 [INFO] agent.rpc: Accepted client: 127.0.0.1:41958
2016/03/17 06:36:06 [DEBUG] http: Shutting down http server (127.0.0.1:10741)
--- PASS: TestMembersCommandRun_statusFilter_failed (1.59s)
=== RUN   TestReloadCommand_implements
--- PASS: TestReloadCommand_implements (0.00s)
=== RUN   TestReloadCommandRun
2016/03/17 06:36:06 [INFO] agent.rpc: Accepted client: 127.0.0.1:34544
2016/03/17 06:36:07 [DEBUG] http: Shutting down http server (127.0.0.1:10751)
--- PASS: TestReloadCommandRun (1.45s)
=== RUN   TestAddrFlag_default
--- PASS: TestAddrFlag_default (0.00s)
=== RUN   TestAddrFlag_onlyEnv
--- PASS: TestAddrFlag_onlyEnv (0.00s)
=== RUN   TestAddrFlag_precedence
--- PASS: TestAddrFlag_precedence (0.00s)
=== RUN   TestRTTCommand_Implements
--- PASS: TestRTTCommand_Implements (0.00s)
=== RUN   TestRTTCommand_Run_BadArgs
--- PASS: TestRTTCommand_Run_BadArgs (0.00s)
=== RUN   TestRTTCommand_Run_LAN
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (284µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (278.667µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (330.334µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (323.666µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (386.333µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (357µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (344.333µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (362µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (325µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (278µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (352µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (270.667µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (310.667µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (258.667µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (286.667µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (289.666µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (349µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (314µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (360.333µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (278µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (268µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (280.667µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (333µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (274.333µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (283µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (453µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (295.333µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (321.334µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (334.334µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (287.666µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (449.334µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (283.666µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (329.333µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (293.667µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (313.666µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (338.333µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (297.667µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (303.333µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (332.667µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (282.333µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (376.333µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (281µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (433µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (302.667µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (360.334µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (358µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (278µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (338.667µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (413.333µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (333.667µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (300.666µs) from=127.0.0.1:41422
2016/03/17 06:36:08 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:08 [DEBUG] http: Request GET /v1/catalog/nodes (475.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (257.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (327µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (387.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (278.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (357.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (337µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (381µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (307.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (276.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (284.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (378µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (277.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (326µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (454.334µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (285.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (505.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (291.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (275.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (255.666µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (295.666µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (275.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (241µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (230.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (349.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (279µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (403.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (509µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (291.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (220.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (354.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (370.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (260µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (313µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (267µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (250.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (290.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (252.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (260.334µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (254µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (303.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (268µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (241.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (354.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (244.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (291µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (283.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (290µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (397.666µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (255.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (469.334µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (280.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (334.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (266.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (279µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (320µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (322.666µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (313µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (270.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (327.334µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (305µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (252.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (249.666µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (371.666µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (367.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (261.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (283.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (252.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (289µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (346µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (248µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (416.333µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (311µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (354µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (332.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (318µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (255.667µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (355.334µs) from=127.0.0.1:41422
2016/03/17 06:36:09 [DEBUG] http: Request GET /v1/catalog/nodes (307.333µs) from=127.0.0.1:41422
2016/03/17 06:36:10 [DEBUG] http: Request GET /v1/coordinate/nodes (507.667µs) from=127.0.0.1:41424
2016/03/17 06:36:10 [DEBUG] http: Shutting down http server (127.0.0.1:10761)
--- FAIL: TestRTTCommand_Run_LAN (2.99s)
	rtt_test.go:105: bad: 1: "Could not find a coordinate for node \"Node 36\"\n"
=== RUN   TestRTTCommand_Run_WAN
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (473.667µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (349.667µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (477.666µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (398.667µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (293µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (339µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (403µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (384.333µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (282.333µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (374.667µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (263.667µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (365.333µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (266.334µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (301.667µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (299.333µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (286.667µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (343µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (336.667µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (270.667µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (281.666µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (451µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (314.667µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (296.333µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (316µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (268.333µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (406µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (450.334µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (499µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (273.334µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (286.333µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (269.334µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (275µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (267.333µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (457µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (314µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (258.667µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (303.666µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (300µs) from=127.0.0.1:51077
2016/03/17 06:36:11 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:11 [DEBUG] http: Request GET /v1/catalog/nodes (254.334µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (370.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (298µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (345.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (273.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (343µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (319.666µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (426.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (274.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (334.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (306.334µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (332.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (2.501ms) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (253.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (260.334µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (268.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (379µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (363µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (282µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (272µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (472.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (276.666µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (319µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (335.666µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (392.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (264.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (259.334µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (270µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (263µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (297.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (279µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (264.666µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (440.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (271.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (259.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (323.666µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (297µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (257.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (251.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (444.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (268µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (262.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (287.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (323.666µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (322.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (244.666µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (267.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (274µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (446.666µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (330.334µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (319.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (287µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (438.666µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (430µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (259µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (295.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (289.666µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (277.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (364µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (251.334µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (374.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (417.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (266.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (398.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (313µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (365.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (318µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (534.334µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (275.334µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (368µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (331.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (353µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (478µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (278.333µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (264.667µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/catalog/nodes (491.666µs) from=127.0.0.1:51077
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/coordinate/datacenters (858µs) from=127.0.0.1:51079
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/agent/self (686.667µs) from=127.0.0.1:51080
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/coordinate/datacenters (224µs) from=127.0.0.1:51080
2016/03/17 06:36:12 [DEBUG] http: Request GET /v1/coordinate/datacenters (286.666µs) from=127.0.0.1:51081
2016/03/17 06:36:13 [DEBUG] http: Shutting down http server (127.0.0.1:10771)
--- PASS: TestRTTCommand_Run_WAN (2.66s)
=== RUN   TestVersionCommand_implements
--- PASS: TestVersionCommand_implements (0.00s)
=== RUN   TestWatchCommand_implements
--- PASS: TestWatchCommand_implements (0.00s)
=== RUN   TestWatchCommandRun
2016/03/17 06:36:14 [DEBUG] http: Request GET /v1/agent/self (668µs) from=127.0.0.1:51689
2016/03/17 06:36:14 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:51690
2016/03/17 06:36:14 [DEBUG] http: Request GET /v1/catalog/nodes (277µs) from=127.0.0.1:51690
2016/03/17 06:36:14 consul.watch: Watch (type: nodes) errored: Unexpected response code: 500 (No cluster leader), retry in 5s
2016/03/17 06:36:19 [DEBUG] http: Request GET /v1/catalog/nodes (619.333µs) from=127.0.0.1:51690
2016/03/17 06:36:19 [DEBUG] http: Shutting down http server (127.0.0.1:10781)
--- PASS: TestWatchCommandRun (6.17s)
FAIL
FAIL	github.com/hashicorp/consul/command	80.769s
FAIL	github.com/hashicorp/consul/command/agent [build failed]
=== RUN   TestACLEndpoint_Apply
2016/03/17 06:35:48 [INFO] raft: Node at 127.0.0.1:15001 [Follower] entering Follower state
2016/03/17 06:35:48 [INFO] serf: EventMemberJoin: Node 15000 127.0.0.1
2016/03/17 06:35:48 [INFO] serf: EventMemberJoin: Node 15000.dc1 127.0.0.1
2016/03/17 06:35:48 [INFO] consul: adding LAN server Node 15000 (Addr: 127.0.0.1:15001) (DC: dc1)
2016/03/17 06:35:48 [INFO] consul: adding WAN server Node 15000.dc1 (Addr: 127.0.0.1:15001) (DC: dc1)
2016/03/17 06:35:48 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:35:48 [INFO] raft: Node at 127.0.0.1:15001 [Candidate] entering Candidate state
2016/03/17 06:35:49 [DEBUG] raft: Votes needed: 1
2016/03/17 06:35:49 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:35:49 [INFO] raft: Election won. Tally: 1
2016/03/17 06:35:49 [INFO] raft: Node at 127.0.0.1:15001 [Leader] entering Leader state
2016/03/17 06:35:49 [INFO] consul: cluster leadership acquired
2016/03/17 06:35:49 [INFO] consul: New leader elected: Node 15000
2016/03/17 06:35:49 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:35:49 [DEBUG] raft: Node 127.0.0.1:15001 updated peer set (2): [127.0.0.1:15001]
2016/03/17 06:35:49 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:35:50 [INFO] consul: member 'Node 15000' joined, marking health alive
2016/03/17 06:35:52 [INFO] consul: shutting down server
2016/03/17 06:35:52 [WARN] serf: Shutdown without a Leave
2016/03/17 06:35:52 [WARN] serf: Shutdown without a Leave
2016/03/17 06:35:52 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACLEndpoint_Apply (5.09s)
=== RUN   TestACLEndpoint_Update_PurgeCache
2016/03/17 06:35:53 [INFO] raft: Node at 127.0.0.1:15005 [Follower] entering Follower state
2016/03/17 06:35:53 [INFO] serf: EventMemberJoin: Node 15004 127.0.0.1
2016/03/17 06:35:53 [INFO] consul: adding LAN server Node 15004 (Addr: 127.0.0.1:15005) (DC: dc1)
2016/03/17 06:35:53 [INFO] serf: EventMemberJoin: Node 15004.dc1 127.0.0.1
2016/03/17 06:35:53 [INFO] consul: adding WAN server Node 15004.dc1 (Addr: 127.0.0.1:15005) (DC: dc1)
2016/03/17 06:35:53 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:35:53 [INFO] raft: Node at 127.0.0.1:15005 [Candidate] entering Candidate state
2016/03/17 06:35:54 [DEBUG] raft: Votes needed: 1
2016/03/17 06:35:54 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:35:54 [INFO] raft: Election won. Tally: 1
2016/03/17 06:35:54 [INFO] raft: Node at 127.0.0.1:15005 [Leader] entering Leader state
2016/03/17 06:35:54 [INFO] consul: cluster leadership acquired
2016/03/17 06:35:54 [INFO] consul: New leader elected: Node 15004
2016/03/17 06:35:54 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:35:54 [DEBUG] raft: Node 127.0.0.1:15005 updated peer set (2): [127.0.0.1:15005]
2016/03/17 06:35:54 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:35:55 [INFO] consul: member 'Node 15004' joined, marking health alive
2016/03/17 06:35:58 [INFO] consul: shutting down server
2016/03/17 06:35:58 [WARN] serf: Shutdown without a Leave
2016/03/17 06:35:59 [WARN] serf: Shutdown without a Leave
--- PASS: TestACLEndpoint_Update_PurgeCache (6.48s)
=== RUN   TestACLEndpoint_Apply_CustomID
2016/03/17 06:35:59 [INFO] raft: Node at 127.0.0.1:15009 [Follower] entering Follower state
2016/03/17 06:35:59 [INFO] serf: EventMemberJoin: Node 15008 127.0.0.1
2016/03/17 06:35:59 [INFO] consul: adding LAN server Node 15008 (Addr: 127.0.0.1:15009) (DC: dc1)
2016/03/17 06:35:59 [INFO] serf: EventMemberJoin: Node 15008.dc1 127.0.0.1
2016/03/17 06:35:59 [INFO] consul: adding WAN server Node 15008.dc1 (Addr: 127.0.0.1:15009) (DC: dc1)
2016/03/17 06:35:59 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:35:59 [INFO] raft: Node at 127.0.0.1:15009 [Candidate] entering Candidate state
2016/03/17 06:36:00 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:00 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:00 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:00 [INFO] raft: Node at 127.0.0.1:15009 [Leader] entering Leader state
2016/03/17 06:36:00 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:00 [INFO] consul: New leader elected: Node 15008
2016/03/17 06:36:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:01 [DEBUG] raft: Node 127.0.0.1:15009 updated peer set (2): [127.0.0.1:15009]
2016/03/17 06:36:01 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:01 [INFO] consul: member 'Node 15008' joined, marking health alive
2016/03/17 06:36:02 [INFO] consul: shutting down server
2016/03/17 06:36:02 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:02 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:02 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:36:02 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_Apply_CustomID (3.56s)
=== RUN   TestACLEndpoint_Apply_Denied
2016/03/17 06:36:03 [INFO] raft: Node at 127.0.0.1:15013 [Follower] entering Follower state
2016/03/17 06:36:03 [INFO] serf: EventMemberJoin: Node 15012 127.0.0.1
2016/03/17 06:36:03 [INFO] consul: adding LAN server Node 15012 (Addr: 127.0.0.1:15013) (DC: dc1)
2016/03/17 06:36:03 [INFO] serf: EventMemberJoin: Node 15012.dc1 127.0.0.1
2016/03/17 06:36:03 [INFO] consul: adding WAN server Node 15012.dc1 (Addr: 127.0.0.1:15013) (DC: dc1)
2016/03/17 06:36:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:03 [INFO] raft: Node at 127.0.0.1:15013 [Candidate] entering Candidate state
2016/03/17 06:36:04 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:04 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:04 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:04 [INFO] raft: Node at 127.0.0.1:15013 [Leader] entering Leader state
2016/03/17 06:36:04 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:04 [INFO] consul: New leader elected: Node 15012
2016/03/17 06:36:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:04 [DEBUG] raft: Node 127.0.0.1:15013 updated peer set (2): [127.0.0.1:15013]
2016/03/17 06:36:04 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:04 [INFO] consul: member 'Node 15012' joined, marking health alive
2016/03/17 06:36:05 [INFO] consul: shutting down server
2016/03/17 06:36:05 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:05 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:05 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACLEndpoint_Apply_Denied (2.73s)
=== RUN   TestACLEndpoint_Apply_DeleteAnon
2016/03/17 06:36:06 [INFO] raft: Node at 127.0.0.1:15017 [Follower] entering Follower state
2016/03/17 06:36:06 [INFO] serf: EventMemberJoin: Node 15016 127.0.0.1
2016/03/17 06:36:06 [INFO] consul: adding LAN server Node 15016 (Addr: 127.0.0.1:15017) (DC: dc1)
2016/03/17 06:36:06 [INFO] serf: EventMemberJoin: Node 15016.dc1 127.0.0.1
2016/03/17 06:36:06 [INFO] consul: adding WAN server Node 15016.dc1 (Addr: 127.0.0.1:15017) (DC: dc1)
2016/03/17 06:36:06 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:06 [INFO] raft: Node at 127.0.0.1:15017 [Candidate] entering Candidate state
2016/03/17 06:36:07 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:07 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:07 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:07 [INFO] raft: Node at 127.0.0.1:15017 [Leader] entering Leader state
2016/03/17 06:36:07 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:07 [INFO] consul: New leader elected: Node 15016
2016/03/17 06:36:07 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:07 [DEBUG] raft: Node 127.0.0.1:15017 updated peer set (2): [127.0.0.1:15017]
2016/03/17 06:36:07 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:07 [INFO] consul: member 'Node 15016' joined, marking health alive
2016/03/17 06:36:08 [INFO] consul: shutting down server
2016/03/17 06:36:08 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:08 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:08 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACLEndpoint_Apply_DeleteAnon (3.52s)
=== RUN   TestACLEndpoint_Apply_RootChange
2016/03/17 06:36:09 [INFO] raft: Node at 127.0.0.1:15021 [Follower] entering Follower state
2016/03/17 06:36:09 [INFO] serf: EventMemberJoin: Node 15020 127.0.0.1
2016/03/17 06:36:09 [INFO] consul: adding LAN server Node 15020 (Addr: 127.0.0.1:15021) (DC: dc1)
2016/03/17 06:36:09 [INFO] serf: EventMemberJoin: Node 15020.dc1 127.0.0.1
2016/03/17 06:36:09 [INFO] consul: adding WAN server Node 15020.dc1 (Addr: 127.0.0.1:15021) (DC: dc1)
2016/03/17 06:36:09 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:09 [INFO] raft: Node at 127.0.0.1:15021 [Candidate] entering Candidate state
2016/03/17 06:36:10 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:10 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:10 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:10 [INFO] raft: Node at 127.0.0.1:15021 [Leader] entering Leader state
2016/03/17 06:36:10 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:10 [INFO] consul: New leader elected: Node 15020
2016/03/17 06:36:10 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:10 [DEBUG] raft: Node 127.0.0.1:15021 updated peer set (2): [127.0.0.1:15021]
2016/03/17 06:36:11 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:11 [INFO] consul: member 'Node 15020' joined, marking health alive
2016/03/17 06:36:11 [INFO] consul: shutting down server
2016/03/17 06:36:11 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:11 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:12 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:36:12 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_Apply_RootChange (3.04s)
=== RUN   TestACLEndpoint_Get
2016/03/17 06:36:12 [INFO] raft: Node at 127.0.0.1:15025 [Follower] entering Follower state
2016/03/17 06:36:12 [INFO] serf: EventMemberJoin: Node 15024 127.0.0.1
2016/03/17 06:36:12 [INFO] consul: adding LAN server Node 15024 (Addr: 127.0.0.1:15025) (DC: dc1)
2016/03/17 06:36:12 [INFO] serf: EventMemberJoin: Node 15024.dc1 127.0.0.1
2016/03/17 06:36:12 [INFO] consul: adding WAN server Node 15024.dc1 (Addr: 127.0.0.1:15025) (DC: dc1)
2016/03/17 06:36:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:12 [INFO] raft: Node at 127.0.0.1:15025 [Candidate] entering Candidate state
2016/03/17 06:36:13 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:13 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:13 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:13 [INFO] raft: Node at 127.0.0.1:15025 [Leader] entering Leader state
2016/03/17 06:36:13 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:13 [INFO] consul: New leader elected: Node 15024
2016/03/17 06:36:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:14 [DEBUG] raft: Node 127.0.0.1:15025 updated peer set (2): [127.0.0.1:15025]
2016/03/17 06:36:14 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:14 [INFO] consul: member 'Node 15024' joined, marking health alive
2016/03/17 06:36:15 [INFO] consul: shutting down server
2016/03/17 06:36:15 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:15 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:16 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:36:16 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_Get (4.13s)
=== RUN   TestACLEndpoint_GetPolicy
2016/03/17 06:36:16 [INFO] raft: Node at 127.0.0.1:15029 [Follower] entering Follower state
2016/03/17 06:36:16 [INFO] serf: EventMemberJoin: Node 15028 127.0.0.1
2016/03/17 06:36:16 [INFO] consul: adding LAN server Node 15028 (Addr: 127.0.0.1:15029) (DC: dc1)
2016/03/17 06:36:16 [INFO] serf: EventMemberJoin: Node 15028.dc1 127.0.0.1
2016/03/17 06:36:16 [INFO] consul: adding WAN server Node 15028.dc1 (Addr: 127.0.0.1:15029) (DC: dc1)
2016/03/17 06:36:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:16 [INFO] raft: Node at 127.0.0.1:15029 [Candidate] entering Candidate state
2016/03/17 06:36:17 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:17 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:17 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:17 [INFO] raft: Node at 127.0.0.1:15029 [Leader] entering Leader state
2016/03/17 06:36:17 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:17 [INFO] consul: New leader elected: Node 15028
2016/03/17 06:36:17 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:17 [DEBUG] raft: Node 127.0.0.1:15029 updated peer set (2): [127.0.0.1:15029]
2016/03/17 06:36:17 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:18 [INFO] consul: member 'Node 15028' joined, marking health alive
2016/03/17 06:36:18 [INFO] consul: shutting down server
2016/03/17 06:36:18 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:19 [WARN] serf: Shutdown without a Leave
--- PASS: TestACLEndpoint_GetPolicy (3.00s)
=== RUN   TestACLEndpoint_List
2016/03/17 06:36:19 [INFO] raft: Node at 127.0.0.1:15033 [Follower] entering Follower state
2016/03/17 06:36:19 [INFO] serf: EventMemberJoin: Node 15032 127.0.0.1
2016/03/17 06:36:19 [INFO] consul: adding LAN server Node 15032 (Addr: 127.0.0.1:15033) (DC: dc1)
2016/03/17 06:36:19 [INFO] serf: EventMemberJoin: Node 15032.dc1 127.0.0.1
2016/03/17 06:36:19 [INFO] consul: adding WAN server Node 15032.dc1 (Addr: 127.0.0.1:15033) (DC: dc1)
2016/03/17 06:36:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:19 [INFO] raft: Node at 127.0.0.1:15033 [Candidate] entering Candidate state
2016/03/17 06:36:20 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:20 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:20 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:20 [INFO] raft: Node at 127.0.0.1:15033 [Leader] entering Leader state
2016/03/17 06:36:20 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:20 [INFO] consul: New leader elected: Node 15032
2016/03/17 06:36:20 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:20 [DEBUG] raft: Node 127.0.0.1:15033 updated peer set (2): [127.0.0.1:15033]
2016/03/17 06:36:20 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:20 [INFO] consul: member 'Node 15032' joined, marking health alive
2016/03/17 06:36:23 [INFO] consul: shutting down server
2016/03/17 06:36:23 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:23 [WARN] serf: Shutdown without a Leave
--- PASS: TestACLEndpoint_List (4.26s)
=== RUN   TestACLEndpoint_List_Denied
2016/03/17 06:36:23 [INFO] raft: Node at 127.0.0.1:15037 [Follower] entering Follower state
2016/03/17 06:36:23 [INFO] serf: EventMemberJoin: Node 15036 127.0.0.1
2016/03/17 06:36:23 [INFO] consul: adding LAN server Node 15036 (Addr: 127.0.0.1:15037) (DC: dc1)
2016/03/17 06:36:23 [INFO] serf: EventMemberJoin: Node 15036.dc1 127.0.0.1
2016/03/17 06:36:23 [INFO] consul: adding WAN server Node 15036.dc1 (Addr: 127.0.0.1:15037) (DC: dc1)
2016/03/17 06:36:23 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:23 [INFO] raft: Node at 127.0.0.1:15037 [Candidate] entering Candidate state
2016/03/17 06:36:24 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:24 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:24 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:24 [INFO] raft: Node at 127.0.0.1:15037 [Leader] entering Leader state
2016/03/17 06:36:24 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:24 [INFO] consul: New leader elected: Node 15036
2016/03/17 06:36:24 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:24 [DEBUG] raft: Node 127.0.0.1:15037 updated peer set (2): [127.0.0.1:15037]
2016/03/17 06:36:24 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:24 [INFO] consul: member 'Node 15036' joined, marking health alive
2016/03/17 06:36:25 [INFO] consul: shutting down server
2016/03/17 06:36:25 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:25 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:25 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:36:25 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_List_Denied (2.03s)
=== RUN   TestACL_Disabled
2016/03/17 06:36:25 [INFO] raft: Node at 127.0.0.1:15041 [Follower] entering Follower state
2016/03/17 06:36:25 [INFO] serf: EventMemberJoin: Node 15040 127.0.0.1
2016/03/17 06:36:25 [INFO] consul: adding LAN server Node 15040 (Addr: 127.0.0.1:15041) (DC: dc1)
2016/03/17 06:36:25 [INFO] serf: EventMemberJoin: Node 15040.dc1 127.0.0.1
2016/03/17 06:36:25 [INFO] consul: adding WAN server Node 15040.dc1 (Addr: 127.0.0.1:15041) (DC: dc1)
2016/03/17 06:36:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:26 [INFO] raft: Node at 127.0.0.1:15041 [Candidate] entering Candidate state
2016/03/17 06:36:26 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:26 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:26 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:26 [INFO] raft: Node at 127.0.0.1:15041 [Leader] entering Leader state
2016/03/17 06:36:26 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:26 [INFO] consul: New leader elected: Node 15040
2016/03/17 06:36:26 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:26 [DEBUG] raft: Node 127.0.0.1:15041 updated peer set (2): [127.0.0.1:15041]
2016/03/17 06:36:26 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:26 [INFO] consul: member 'Node 15040' joined, marking health alive
2016/03/17 06:36:27 [INFO] consul: shutting down server
2016/03/17 06:36:27 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:27 [WARN] serf: Shutdown without a Leave
--- PASS: TestACL_Disabled (1.89s)
=== RUN   TestACL_ResolveRootACL
2016/03/17 06:36:27 [INFO] raft: Node at 127.0.0.1:15045 [Follower] entering Follower state
2016/03/17 06:36:27 [INFO] serf: EventMemberJoin: Node 15044 127.0.0.1
2016/03/17 06:36:27 [INFO] consul: adding LAN server Node 15044 (Addr: 127.0.0.1:15045) (DC: dc1)
2016/03/17 06:36:27 [INFO] serf: EventMemberJoin: Node 15044.dc1 127.0.0.1
2016/03/17 06:36:27 [INFO] consul: shutting down server
2016/03/17 06:36:27 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:27 [INFO] consul: adding WAN server Node 15044.dc1 (Addr: 127.0.0.1:15045) (DC: dc1)
2016/03/17 06:36:27 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:27 [INFO] raft: Node at 127.0.0.1:15045 [Candidate] entering Candidate state
2016/03/17 06:36:27 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:28 [DEBUG] raft: Votes needed: 1
--- PASS: TestACL_ResolveRootACL (1.42s)
=== RUN   TestACL_Authority_NotFound
2016/03/17 06:36:29 [INFO] raft: Node at 127.0.0.1:15049 [Follower] entering Follower state
2016/03/17 06:36:29 [INFO] serf: EventMemberJoin: Node 15048 127.0.0.1
2016/03/17 06:36:29 [INFO] consul: adding LAN server Node 15048 (Addr: 127.0.0.1:15049) (DC: dc1)
2016/03/17 06:36:29 [INFO] serf: EventMemberJoin: Node 15048.dc1 127.0.0.1
2016/03/17 06:36:29 [INFO] consul: adding WAN server Node 15048.dc1 (Addr: 127.0.0.1:15049) (DC: dc1)
2016/03/17 06:36:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:29 [INFO] raft: Node at 127.0.0.1:15049 [Candidate] entering Candidate state
2016/03/17 06:36:30 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:30 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:30 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:30 [INFO] raft: Node at 127.0.0.1:15049 [Leader] entering Leader state
2016/03/17 06:36:30 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:30 [INFO] consul: New leader elected: Node 15048
2016/03/17 06:36:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:30 [DEBUG] raft: Node 127.0.0.1:15049 updated peer set (2): [127.0.0.1:15049]
2016/03/17 06:36:30 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:30 [INFO] consul: member 'Node 15048' joined, marking health alive
2016/03/17 06:36:30 [INFO] consul: shutting down server
2016/03/17 06:36:30 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:31 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:31 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:36:31 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACL_Authority_NotFound (2.54s)
=== RUN   TestACL_Authority_Found
2016/03/17 06:36:31 [INFO] raft: Node at 127.0.0.1:15053 [Follower] entering Follower state
2016/03/17 06:36:31 [INFO] serf: EventMemberJoin: Node 15052 127.0.0.1
2016/03/17 06:36:31 [INFO] consul: adding LAN server Node 15052 (Addr: 127.0.0.1:15053) (DC: dc1)
2016/03/17 06:36:31 [INFO] serf: EventMemberJoin: Node 15052.dc1 127.0.0.1
2016/03/17 06:36:31 [INFO] consul: adding WAN server Node 15052.dc1 (Addr: 127.0.0.1:15053) (DC: dc1)
2016/03/17 06:36:31 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:31 [INFO] raft: Node at 127.0.0.1:15053 [Candidate] entering Candidate state
2016/03/17 06:36:32 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:32 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:32 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:32 [INFO] raft: Node at 127.0.0.1:15053 [Leader] entering Leader state
2016/03/17 06:36:32 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:32 [INFO] consul: New leader elected: Node 15052
2016/03/17 06:36:32 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:32 [DEBUG] raft: Node 127.0.0.1:15053 updated peer set (2): [127.0.0.1:15053]
2016/03/17 06:36:32 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:32 [INFO] consul: member 'Node 15052' joined, marking health alive
2016/03/17 06:36:33 [INFO] consul: shutting down server
2016/03/17 06:36:33 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:33 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:33 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACL_Authority_Found (2.48s)
=== RUN   TestACL_Authority_Anonymous_Found
2016/03/17 06:36:34 [INFO] raft: Node at 127.0.0.1:15057 [Follower] entering Follower state
2016/03/17 06:36:34 [INFO] serf: EventMemberJoin: Node 15056 127.0.0.1
2016/03/17 06:36:34 [INFO] consul: adding LAN server Node 15056 (Addr: 127.0.0.1:15057) (DC: dc1)
2016/03/17 06:36:34 [INFO] serf: EventMemberJoin: Node 15056.dc1 127.0.0.1
2016/03/17 06:36:34 [INFO] consul: adding WAN server Node 15056.dc1 (Addr: 127.0.0.1:15057) (DC: dc1)
2016/03/17 06:36:34 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:34 [INFO] raft: Node at 127.0.0.1:15057 [Candidate] entering Candidate state
2016/03/17 06:36:34 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:34 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:34 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:34 [INFO] raft: Node at 127.0.0.1:15057 [Leader] entering Leader state
2016/03/17 06:36:34 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:34 [INFO] consul: New leader elected: Node 15056
2016/03/17 06:36:34 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:34 [DEBUG] raft: Node 127.0.0.1:15057 updated peer set (2): [127.0.0.1:15057]
2016/03/17 06:36:35 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:35 [INFO] consul: member 'Node 15056' joined, marking health alive
2016/03/17 06:36:35 [INFO] consul: shutting down server
2016/03/17 06:36:35 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:35 [WARN] serf: Shutdown without a Leave
--- PASS: TestACL_Authority_Anonymous_Found (1.75s)
=== RUN   TestACL_Authority_Master_Found
2016/03/17 06:36:35 [INFO] raft: Node at 127.0.0.1:15061 [Follower] entering Follower state
2016/03/17 06:36:35 [INFO] serf: EventMemberJoin: Node 15060 127.0.0.1
2016/03/17 06:36:35 [INFO] consul: adding LAN server Node 15060 (Addr: 127.0.0.1:15061) (DC: dc1)
2016/03/17 06:36:35 [INFO] serf: EventMemberJoin: Node 15060.dc1 127.0.0.1
2016/03/17 06:36:35 [INFO] consul: adding WAN server Node 15060.dc1 (Addr: 127.0.0.1:15061) (DC: dc1)
2016/03/17 06:36:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:36 [INFO] raft: Node at 127.0.0.1:15061 [Candidate] entering Candidate state
2016/03/17 06:36:36 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:36 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:36 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:36 [INFO] raft: Node at 127.0.0.1:15061 [Leader] entering Leader state
2016/03/17 06:36:36 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:36 [INFO] consul: New leader elected: Node 15060
2016/03/17 06:36:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:36 [DEBUG] raft: Node 127.0.0.1:15061 updated peer set (2): [127.0.0.1:15061]
2016/03/17 06:36:36 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:37 [INFO] consul: member 'Node 15060' joined, marking health alive
2016/03/17 06:36:37 [INFO] consul: shutting down server
2016/03/17 06:36:37 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:37 [WARN] serf: Shutdown without a Leave
--- PASS: TestACL_Authority_Master_Found (2.12s)
=== RUN   TestACL_Authority_Management
2016/03/17 06:36:38 [INFO] raft: Node at 127.0.0.1:15065 [Follower] entering Follower state
2016/03/17 06:36:38 [INFO] serf: EventMemberJoin: Node 15064 127.0.0.1
2016/03/17 06:36:38 [INFO] consul: adding LAN server Node 15064 (Addr: 127.0.0.1:15065) (DC: dc1)
2016/03/17 06:36:38 [INFO] serf: EventMemberJoin: Node 15064.dc1 127.0.0.1
2016/03/17 06:36:38 [INFO] consul: adding WAN server Node 15064.dc1 (Addr: 127.0.0.1:15065) (DC: dc1)
2016/03/17 06:36:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:38 [INFO] raft: Node at 127.0.0.1:15065 [Candidate] entering Candidate state
2016/03/17 06:36:38 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:38 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:38 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:38 [INFO] raft: Node at 127.0.0.1:15065 [Leader] entering Leader state
2016/03/17 06:36:38 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:38 [INFO] consul: New leader elected: Node 15064
2016/03/17 06:36:38 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:38 [DEBUG] raft: Node 127.0.0.1:15065 updated peer set (2): [127.0.0.1:15065]
2016/03/17 06:36:38 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:39 [INFO] consul: member 'Node 15064' joined, marking health alive
2016/03/17 06:36:39 [INFO] consul: shutting down server
2016/03/17 06:36:39 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:39 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:39 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACL_Authority_Management (2.29s)
=== RUN   TestACL_NonAuthority_NotFound
2016/03/17 06:36:40 [INFO] raft: Node at 127.0.0.1:15069 [Follower] entering Follower state
2016/03/17 06:36:40 [INFO] serf: EventMemberJoin: Node 15068 127.0.0.1
2016/03/17 06:36:40 [INFO] consul: adding LAN server Node 15068 (Addr: 127.0.0.1:15069) (DC: dc1)
2016/03/17 06:36:40 [INFO] serf: EventMemberJoin: Node 15068.dc1 127.0.0.1
2016/03/17 06:36:40 [INFO] consul: adding WAN server Node 15068.dc1 (Addr: 127.0.0.1:15069) (DC: dc1)
2016/03/17 06:36:40 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:40 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/17 06:36:41 [INFO] raft: Node at 127.0.0.1:15073 [Follower] entering Follower state
2016/03/17 06:36:41 [INFO] serf: EventMemberJoin: Node 15072 127.0.0.1
2016/03/17 06:36:41 [INFO] consul: adding LAN server Node 15072 (Addr: 127.0.0.1:15073) (DC: dc1)
2016/03/17 06:36:41 [INFO] serf: EventMemberJoin: Node 15072.dc1 127.0.0.1
2016/03/17 06:36:41 [INFO] consul: adding WAN server Node 15072.dc1 (Addr: 127.0.0.1:15073) (DC: dc1)
2016/03/17 06:36:41 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15070
2016/03/17 06:36:41 [DEBUG] memberlist: TCP connection from=127.0.0.1:48226
2016/03/17 06:36:41 [INFO] serf: EventMemberJoin: Node 15072 127.0.0.1
2016/03/17 06:36:41 [INFO] consul: adding LAN server Node 15072 (Addr: 127.0.0.1:15073) (DC: dc1)
2016/03/17 06:36:41 [INFO] serf: EventMemberJoin: Node 15068 127.0.0.1
2016/03/17 06:36:41 [INFO] consul: adding LAN server Node 15068 (Addr: 127.0.0.1:15069) (DC: dc1)
2016/03/17 06:36:41 [DEBUG] serf: messageJoinType: Node 15072
2016/03/17 06:36:41 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:41 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:41 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:41 [INFO] raft: Node at 127.0.0.1:15069 [Leader] entering Leader state
2016/03/17 06:36:41 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:41 [INFO] consul: New leader elected: Node 15068
2016/03/17 06:36:41 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:36:41 [DEBUG] serf: messageJoinType: Node 15072
2016/03/17 06:36:41 [DEBUG] serf: messageJoinType: Node 15072
2016/03/17 06:36:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:41 [INFO] consul: New leader elected: Node 15068
2016/03/17 06:36:41 [DEBUG] serf: messageJoinType: Node 15072
2016/03/17 06:36:41 [DEBUG] serf: messageJoinType: Node 15072
2016/03/17 06:36:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:41 [DEBUG] serf: messageJoinType: Node 15072
2016/03/17 06:36:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:41 [DEBUG] serf: messageJoinType: Node 15072
2016/03/17 06:36:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:41 [DEBUG] serf: messageJoinType: Node 15072
2016/03/17 06:36:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:41 [DEBUG] raft: Node 127.0.0.1:15069 updated peer set (2): [127.0.0.1:15069]
2016/03/17 06:36:41 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:41 [INFO] consul: member 'Node 15068' joined, marking health alive
2016/03/17 06:36:41 [DEBUG] raft: Node 127.0.0.1:15069 updated peer set (2): [127.0.0.1:15073 127.0.0.1:15069]
2016/03/17 06:36:41 [INFO] raft: Added peer 127.0.0.1:15073, starting replication
2016/03/17 06:36:41 [DEBUG] raft-net: 127.0.0.1:15073 accepted connection from: 127.0.0.1:48602
2016/03/17 06:36:41 [DEBUG] raft-net: 127.0.0.1:15073 accepted connection from: 127.0.0.1:48604
2016/03/17 06:36:41 [ERR] consul.acl: Failed to get policy for 'does not exist': No cluster leader
2016/03/17 06:36:41 [INFO] consul: shutting down server
2016/03/17 06:36:41 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:42 [DEBUG] memberlist: Failed UDP ping: Node 15072 (timeout reached)
2016/03/17 06:36:42 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:42 [INFO] memberlist: Suspect Node 15072 has failed, no acks received
2016/03/17 06:36:42 [WARN] raft: Failed to get previous log: 4 log not found (last: 0)
2016/03/17 06:36:42 [DEBUG] raft: Failed to contact 127.0.0.1:15073 in 247.573333ms
2016/03/17 06:36:42 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:36:42 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:36:42 [INFO] raft: Node at 127.0.0.1:15069 [Follower] entering Follower state
2016/03/17 06:36:42 [INFO] consul: cluster leadership lost
2016/03/17 06:36:42 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:36:42 [ERR] consul: failed to reconcile member: {Node 15072 127.0.0.1 15074 map[dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15073 role:consul] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:36:42 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:36:42 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15073: EOF
2016/03/17 06:36:42 [DEBUG] memberlist: Failed UDP ping: Node 15072 (timeout reached)
2016/03/17 06:36:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:42 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/03/17 06:36:42 [INFO] memberlist: Suspect Node 15072 has failed, no acks received
2016/03/17 06:36:42 [INFO] memberlist: Marking Node 15072 as failed, suspect timeout reached
2016/03/17 06:36:42 [INFO] serf: EventMemberFailed: Node 15072 127.0.0.1
2016/03/17 06:36:42 [INFO] consul: removing LAN server Node 15072 (Addr: 127.0.0.1:15073) (DC: dc1)
2016/03/17 06:36:42 [DEBUG] memberlist: Failed UDP ping: Node 15072 (timeout reached)
2016/03/17 06:36:42 [INFO] memberlist: Suspect Node 15072 has failed, no acks received
2016/03/17 06:36:42 [INFO] consul: shutting down server
2016/03/17 06:36:42 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:42 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:36:42 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15073: EOF
2016/03/17 06:36:42 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:42 [DEBUG] raft: Votes needed: 2
--- FAIL: TestACL_NonAuthority_NotFound (2.98s)
	acl_test.go:245: err: <nil>
=== RUN   TestACL_NonAuthority_Found
2016/03/17 06:36:43 [INFO] raft: Node at 127.0.0.1:15077 [Follower] entering Follower state
2016/03/17 06:36:43 [INFO] serf: EventMemberJoin: Node 15076 127.0.0.1
2016/03/17 06:36:43 [INFO] consul: adding LAN server Node 15076 (Addr: 127.0.0.1:15077) (DC: dc1)
2016/03/17 06:36:43 [INFO] serf: EventMemberJoin: Node 15076.dc1 127.0.0.1
2016/03/17 06:36:43 [INFO] consul: adding WAN server Node 15076.dc1 (Addr: 127.0.0.1:15077) (DC: dc1)
2016/03/17 06:36:43 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:43 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/03/17 06:36:44 [INFO] raft: Node at 127.0.0.1:15081 [Follower] entering Follower state
2016/03/17 06:36:44 [INFO] serf: EventMemberJoin: Node 15080 127.0.0.1
2016/03/17 06:36:44 [INFO] consul: adding LAN server Node 15080 (Addr: 127.0.0.1:15081) (DC: dc1)
2016/03/17 06:36:44 [INFO] serf: EventMemberJoin: Node 15080.dc1 127.0.0.1
2016/03/17 06:36:44 [INFO] consul: adding WAN server Node 15080.dc1 (Addr: 127.0.0.1:15081) (DC: dc1)
2016/03/17 06:36:44 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15078
2016/03/17 06:36:44 [DEBUG] memberlist: TCP connection from=127.0.0.1:36740
2016/03/17 06:36:44 [INFO] serf: EventMemberJoin: Node 15076 127.0.0.1
2016/03/17 06:36:44 [INFO] consul: adding LAN server Node 15076 (Addr: 127.0.0.1:15077) (DC: dc1)
2016/03/17 06:36:44 [INFO] serf: EventMemberJoin: Node 15080 127.0.0.1
2016/03/17 06:36:44 [INFO] consul: adding LAN server Node 15080 (Addr: 127.0.0.1:15081) (DC: dc1)
2016/03/17 06:36:44 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:44 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:44 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:44 [INFO] raft: Node at 127.0.0.1:15077 [Leader] entering Leader state
2016/03/17 06:36:44 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:44 [INFO] consul: New leader elected: Node 15076
2016/03/17 06:36:44 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:36:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:44 [INFO] consul: New leader elected: Node 15076
2016/03/17 06:36:44 [DEBUG] serf: messageJoinType: Node 15080
2016/03/17 06:36:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:44 [DEBUG] serf: messageJoinType: Node 15080
2016/03/17 06:36:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:44 [DEBUG] serf: messageJoinType: Node 15080
2016/03/17 06:36:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:44 [DEBUG] serf: messageJoinType: Node 15080
2016/03/17 06:36:44 [DEBUG] serf: messageJoinType: Node 15080
2016/03/17 06:36:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:44 [DEBUG] serf: messageJoinType: Node 15080
2016/03/17 06:36:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:44 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:44 [DEBUG] serf: messageJoinType: Node 15080
2016/03/17 06:36:44 [DEBUG] serf: messageJoinType: Node 15080
2016/03/17 06:36:44 [DEBUG] raft: Node 127.0.0.1:15077 updated peer set (2): [127.0.0.1:15077]
2016/03/17 06:36:44 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:45 [INFO] consul: member 'Node 15076' joined, marking health alive
2016/03/17 06:36:45 [DEBUG] raft: Node 127.0.0.1:15077 updated peer set (2): [127.0.0.1:15081 127.0.0.1:15077]
2016/03/17 06:36:45 [INFO] raft: Added peer 127.0.0.1:15081, starting replication
2016/03/17 06:36:45 [DEBUG] raft-net: 127.0.0.1:15081 accepted connection from: 127.0.0.1:36801
2016/03/17 06:36:45 [DEBUG] raft-net: 127.0.0.1:15081 accepted connection from: 127.0.0.1:36802
2016/03/17 06:36:45 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/03/17 06:36:45 [WARN] raft: AppendEntries to 127.0.0.1:15081 rejected, sending older logs (next: 1)
2016/03/17 06:36:46 [DEBUG] raft: Node 127.0.0.1:15081 updated peer set (2): [127.0.0.1:15077]
2016/03/17 06:36:46 [INFO] raft: pipelining replication to peer 127.0.0.1:15081
2016/03/17 06:36:46 [DEBUG] raft: Node 127.0.0.1:15077 updated peer set (2): [127.0.0.1:15081 127.0.0.1:15077]
2016/03/17 06:36:46 [INFO] consul: member 'Node 15080' joined, marking health alive
2016/03/17 06:36:46 [DEBUG] raft: Node 127.0.0.1:15081 updated peer set (2): [127.0.0.1:15081 127.0.0.1:15077]
2016/03/17 06:36:46 [INFO] consul: shutting down server
2016/03/17 06:36:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:46 [DEBUG] memberlist: Failed UDP ping: Node 15080 (timeout reached)
2016/03/17 06:36:46 [INFO] memberlist: Suspect Node 15080 has failed, no acks received
2016/03/17 06:36:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:46 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:36:46 [ERR] raft: Failed to heartbeat to 127.0.0.1:15081: EOF
2016/03/17 06:36:46 [INFO] consul: shutting down server
2016/03/17 06:36:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:46 [ERR] raft: Failed to heartbeat to 127.0.0.1:15081: dial tcp 127.0.0.1:15081: getsockopt: connection refused
2016/03/17 06:36:46 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:36:46 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15081
2016/03/17 06:36:46 [ERR] raft: Failed to heartbeat to 127.0.0.1:15081: dial tcp 127.0.0.1:15081: getsockopt: connection refused
2016/03/17 06:36:46 [WARN] raft: Failed to contact 127.0.0.1:15081 in 31.206667ms
2016/03/17 06:36:46 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:36:46 [INFO] raft: Node at 127.0.0.1:15077 [Follower] entering Follower state
2016/03/17 06:36:46 [ERR] raft: Failed to heartbeat to 127.0.0.1:15081: dial tcp 127.0.0.1:15081: getsockopt: connection refused
2016/03/17 06:36:46 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/03/17 06:36:46 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15081: dial tcp 127.0.0.1:15081: getsockopt: connection refused
2016/03/17 06:36:46 [ERR] consul: failed to reconcile member: {Node 15080 127.0.0.1 15082 map[vsn:2 vsn_min:1 vsn_max:3 build: port:15081 role:consul dc:dc1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:36:46 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:36:46 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/17 06:36:46 [INFO] memberlist: Marking Node 15080 as failed, suspect timeout reached
2016/03/17 06:36:46 [INFO] serf: EventMemberFailed: Node 15080 127.0.0.1
2016/03/17 06:36:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:46 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/03/17 06:36:46 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15081: dial tcp 127.0.0.1:15081: getsockopt: connection refused
2016/03/17 06:36:47 [DEBUG] raft: Votes needed: 2
--- PASS: TestACL_NonAuthority_Found (4.11s)
=== RUN   TestACL_NonAuthority_Management
2016/03/17 06:36:47 [INFO] raft: Node at 127.0.0.1:15085 [Follower] entering Follower state
2016/03/17 06:36:47 [INFO] serf: EventMemberJoin: Node 15084 127.0.0.1
2016/03/17 06:36:47 [INFO] consul: adding LAN server Node 15084 (Addr: 127.0.0.1:15085) (DC: dc1)
2016/03/17 06:36:47 [INFO] serf: EventMemberJoin: Node 15084.dc1 127.0.0.1
2016/03/17 06:36:47 [INFO] consul: adding WAN server Node 15084.dc1 (Addr: 127.0.0.1:15085) (DC: dc1)
2016/03/17 06:36:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:47 [INFO] raft: Node at 127.0.0.1:15085 [Candidate] entering Candidate state
2016/03/17 06:36:48 [INFO] raft: Node at 127.0.0.1:15089 [Follower] entering Follower state
2016/03/17 06:36:48 [INFO] serf: EventMemberJoin: Node 15088 127.0.0.1
2016/03/17 06:36:48 [INFO] consul: adding LAN server Node 15088 (Addr: 127.0.0.1:15089) (DC: dc1)
2016/03/17 06:36:48 [INFO] serf: EventMemberJoin: Node 15088.dc1 127.0.0.1
2016/03/17 06:36:48 [INFO] consul: adding WAN server Node 15088.dc1 (Addr: 127.0.0.1:15089) (DC: dc1)
2016/03/17 06:36:48 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15086
2016/03/17 06:36:48 [DEBUG] memberlist: TCP connection from=127.0.0.1:45513
2016/03/17 06:36:48 [INFO] serf: EventMemberJoin: Node 15088 127.0.0.1
2016/03/17 06:36:48 [INFO] consul: adding LAN server Node 15088 (Addr: 127.0.0.1:15089) (DC: dc1)
2016/03/17 06:36:48 [INFO] serf: EventMemberJoin: Node 15084 127.0.0.1
2016/03/17 06:36:48 [INFO] consul: adding LAN server Node 15084 (Addr: 127.0.0.1:15085) (DC: dc1)
2016/03/17 06:36:48 [DEBUG] serf: messageJoinType: Node 15088
2016/03/17 06:36:48 [DEBUG] raft: Votes needed: 1
2016/03/17 06:36:48 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:36:48 [INFO] raft: Election won. Tally: 1
2016/03/17 06:36:48 [INFO] raft: Node at 127.0.0.1:15085 [Leader] entering Leader state
2016/03/17 06:36:48 [INFO] consul: cluster leadership acquired
2016/03/17 06:36:48 [INFO] consul: New leader elected: Node 15084
2016/03/17 06:36:48 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:36:48 [DEBUG] serf: messageJoinType: Node 15088
2016/03/17 06:36:48 [DEBUG] serf: messageJoinType: Node 15088
2016/03/17 06:36:48 [DEBUG] serf: messageJoinType: Node 15088
2016/03/17 06:36:48 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:48 [INFO] consul: New leader elected: Node 15084
2016/03/17 06:36:48 [DEBUG] serf: messageJoinType: Node 15088
2016/03/17 06:36:48 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:48 [DEBUG] serf: messageJoinType: Node 15088
2016/03/17 06:36:48 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:48 [DEBUG] serf: messageJoinType: Node 15088
2016/03/17 06:36:48 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:48 [DEBUG] serf: messageJoinType: Node 15088
2016/03/17 06:36:48 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:48 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:48 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:36:48 [DEBUG] raft: Node 127.0.0.1:15085 updated peer set (2): [127.0.0.1:15085]
2016/03/17 06:36:48 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:48 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:36:48 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:36:49 [INFO] consul: member 'Node 15084' joined, marking health alive
2016/03/17 06:36:49 [DEBUG] raft: Node 127.0.0.1:15085 updated peer set (2): [127.0.0.1:15089 127.0.0.1:15085]
2016/03/17 06:36:49 [INFO] raft: Added peer 127.0.0.1:15089, starting replication
2016/03/17 06:36:49 [DEBUG] raft-net: 127.0.0.1:15089 accepted connection from: 127.0.0.1:60182
2016/03/17 06:36:49 [DEBUG] raft-net: 127.0.0.1:15089 accepted connection from: 127.0.0.1:60183
2016/03/17 06:36:49 [ERR] consul.acl: Failed to get policy for 'foobar': No cluster leader
2016/03/17 06:36:49 [INFO] consul: shutting down server
2016/03/17 06:36:49 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:49 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:49 [DEBUG] raft: Failed to contact 127.0.0.1:15089 in 164.835666ms
2016/03/17 06:36:49 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:36:49 [INFO] raft: Node at 127.0.0.1:15085 [Follower] entering Follower state
2016/03/17 06:36:49 [INFO] consul: cluster leadership lost
2016/03/17 06:36:49 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:36:49 [ERR] consul: failed to reconcile member: {Node 15088 127.0.0.1 15090 map[vsn:2 vsn_min:1 vsn_max:3 build: port:15089 role:consul dc:dc1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:36:49 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:36:49 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/17 06:36:49 [DEBUG] memberlist: Failed UDP ping: Node 15088 (timeout reached)
2016/03/17 06:36:49 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:36:49 [INFO] raft: Node at 127.0.0.1:15085 [Candidate] entering Candidate state
2016/03/17 06:36:49 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/03/17 06:36:49 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:36:49 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15089: EOF
2016/03/17 06:36:49 [INFO] memberlist: Suspect Node 15088 has failed, no acks received
2016/03/17 06:36:49 [DEBUG] memberlist: Failed UDP ping: Node 15088 (timeout reached)
2016/03/17 06:36:49 [INFO] memberlist: Suspect Node 15088 has failed, no acks received
2016/03/17 06:36:49 [INFO] memberlist: Marking Node 15088 as failed, suspect timeout reached
2016/03/17 06:36:49 [INFO] serf: EventMemberFailed: Node 15088 127.0.0.1
2016/03/17 06:36:49 [INFO] consul: removing LAN server Node 15088 (Addr: 127.0.0.1:15089) (DC: dc1)
2016/03/17 06:36:49 [INFO] consul: shutting down server
2016/03/17 06:36:49 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:50 [WARN] serf: Shutdown without a Leave
2016/03/17 06:36:50 [DEBUG] raft: Votes needed: 2
2016/03/17 06:36:59 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15089: read tcp 127.0.0.1:60186->127.0.0.1:15089: i/o timeout
--- FAIL: TestACL_NonAuthority_Management (12.77s)
	acl_test.go:379: unexpected failed read
=== RUN   TestACL_DownPolicy_Deny
2016/03/17 06:37:00 [INFO] raft: Node at 127.0.0.1:15093 [Follower] entering Follower state
2016/03/17 06:37:00 [INFO] serf: EventMemberJoin: Node 15092 127.0.0.1
2016/03/17 06:37:00 [INFO] consul: adding LAN server Node 15092 (Addr: 127.0.0.1:15093) (DC: dc1)
2016/03/17 06:37:00 [INFO] serf: EventMemberJoin: Node 15092.dc1 127.0.0.1
2016/03/17 06:37:00 [INFO] consul: adding WAN server Node 15092.dc1 (Addr: 127.0.0.1:15093) (DC: dc1)
2016/03/17 06:37:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:00 [INFO] raft: Node at 127.0.0.1:15093 [Candidate] entering Candidate state
2016/03/17 06:37:01 [INFO] raft: Node at 127.0.0.1:15097 [Follower] entering Follower state
2016/03/17 06:37:01 [INFO] serf: EventMemberJoin: Node 15096 127.0.0.1
2016/03/17 06:37:01 [INFO] consul: adding LAN server Node 15096 (Addr: 127.0.0.1:15097) (DC: dc1)
2016/03/17 06:37:01 [INFO] serf: EventMemberJoin: Node 15096.dc1 127.0.0.1
2016/03/17 06:37:01 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15094
2016/03/17 06:37:01 [DEBUG] memberlist: TCP connection from=127.0.0.1:45939
2016/03/17 06:37:01 [INFO] consul: adding WAN server Node 15096.dc1 (Addr: 127.0.0.1:15097) (DC: dc1)
2016/03/17 06:37:01 [INFO] serf: EventMemberJoin: Node 15096 127.0.0.1
2016/03/17 06:37:01 [INFO] consul: adding LAN server Node 15096 (Addr: 127.0.0.1:15097) (DC: dc1)
2016/03/17 06:37:01 [INFO] serf: EventMemberJoin: Node 15092 127.0.0.1
2016/03/17 06:37:01 [INFO] consul: adding LAN server Node 15092 (Addr: 127.0.0.1:15093) (DC: dc1)
2016/03/17 06:37:01 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:37:01 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:01 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:01 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:01 [INFO] raft: Node at 127.0.0.1:15093 [Leader] entering Leader state
2016/03/17 06:37:01 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:01 [INFO] consul: New leader elected: Node 15092
2016/03/17 06:37:01 [DEBUG] serf: messageJoinType: Node 15096
2016/03/17 06:37:01 [DEBUG] serf: messageJoinType: Node 15096
2016/03/17 06:37:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:01 [INFO] consul: New leader elected: Node 15092
2016/03/17 06:37:01 [DEBUG] serf: messageJoinType: Node 15096
2016/03/17 06:37:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:01 [DEBUG] serf: messageJoinType: Node 15096
2016/03/17 06:37:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:01 [DEBUG] serf: messageJoinType: Node 15096
2016/03/17 06:37:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:01 [DEBUG] serf: messageJoinType: Node 15096
2016/03/17 06:37:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:01 [DEBUG] serf: messageJoinType: Node 15096
2016/03/17 06:37:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:01 [DEBUG] serf: messageJoinType: Node 15096
2016/03/17 06:37:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:01 [DEBUG] raft: Node 127.0.0.1:15093 updated peer set (2): [127.0.0.1:15093]
2016/03/17 06:37:01 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:01 [INFO] consul: member 'Node 15092' joined, marking health alive
2016/03/17 06:37:01 [DEBUG] raft: Node 127.0.0.1:15093 updated peer set (2): [127.0.0.1:15097 127.0.0.1:15093]
2016/03/17 06:37:01 [INFO] raft: Added peer 127.0.0.1:15097, starting replication
2016/03/17 06:37:01 [DEBUG] raft-net: 127.0.0.1:15097 accepted connection from: 127.0.0.1:48267
2016/03/17 06:37:01 [DEBUG] raft-net: 127.0.0.1:15097 accepted connection from: 127.0.0.1:48268
2016/03/17 06:37:02 [DEBUG] raft: Failed to contact 127.0.0.1:15097 in 272.506ms
2016/03/17 06:37:02 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:37:02 [INFO] raft: Node at 127.0.0.1:15093 [Follower] entering Follower state
2016/03/17 06:37:02 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:37:02 [INFO] consul: cluster leadership lost
2016/03/17 06:37:02 [ERR] consul.acl: Apply failed: node is not the leader
2016/03/17 06:37:02 [ERR] consul: failed to reconcile member: {Node 15096 127.0.0.1 15098 map[build: port:15097 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:37:02 [INFO] consul: shutting down server
2016/03/17 06:37:02 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:02 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:37:02 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/17 06:37:02 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/03/17 06:37:02 [WARN] raft: AppendEntries to 127.0.0.1:15097 rejected, sending older logs (next: 1)
2016/03/17 06:37:02 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:02 [INFO] raft: Node at 127.0.0.1:15093 [Candidate] entering Candidate state
2016/03/17 06:37:02 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:02 [DEBUG] memberlist: Failed UDP ping: Node 15096 (timeout reached)
2016/03/17 06:37:02 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:37:02 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15097: EOF
2016/03/17 06:37:02 [INFO] memberlist: Suspect Node 15096 has failed, no acks received
2016/03/17 06:37:02 [DEBUG] memberlist: Failed UDP ping: Node 15096 (timeout reached)
2016/03/17 06:37:02 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:37:02 [ERR] raft: Failed to heartbeat to 127.0.0.1:15097: EOF
2016/03/17 06:37:02 [INFO] memberlist: Suspect Node 15096 has failed, no acks received
2016/03/17 06:37:02 [INFO] memberlist: Marking Node 15096 as failed, suspect timeout reached
2016/03/17 06:37:02 [INFO] serf: EventMemberFailed: Node 15096 127.0.0.1
2016/03/17 06:37:02 [INFO] consul: removing LAN server Node 15096 (Addr: 127.0.0.1:15097) (DC: dc1)
2016/03/17 06:37:02 [DEBUG] memberlist: Failed UDP ping: Node 15096 (timeout reached)
2016/03/17 06:37:02 [INFO] memberlist: Suspect Node 15096 has failed, no acks received
2016/03/17 06:37:02 [DEBUG] raft: Node 127.0.0.1:15097 updated peer set (2): [127.0.0.1:15093]
2016/03/17 06:37:02 [INFO] consul: shutting down server
2016/03/17 06:37:02 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:02 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:02 [DEBUG] raft: Votes needed: 2
2016/03/17 06:37:02 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:02 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:37:02 [INFO] raft: Node at 127.0.0.1:15093 [Candidate] entering Candidate state
2016/03/17 06:37:03 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15097: dial tcp 127.0.0.1:15097: getsockopt: connection refused
2016/03/17 06:37:03 [DEBUG] raft: Votes needed: 2
2016/03/17 06:37:12 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15097: read tcp 127.0.0.1:48271->127.0.0.1:15097: i/o timeout
--- FAIL: TestACL_DownPolicy_Deny (12.60s)
	acl_test.go:430: err: node is not the leader
=== RUN   TestACL_DownPolicy_Allow
2016/03/17 06:37:12 [INFO] raft: Node at 127.0.0.1:15101 [Follower] entering Follower state
2016/03/17 06:37:12 [INFO] serf: EventMemberJoin: Node 15100 127.0.0.1
2016/03/17 06:37:12 [INFO] consul: adding LAN server Node 15100 (Addr: 127.0.0.1:15101) (DC: dc1)
2016/03/17 06:37:12 [INFO] serf: EventMemberJoin: Node 15100.dc1 127.0.0.1
2016/03/17 06:37:12 [INFO] consul: adding WAN server Node 15100.dc1 (Addr: 127.0.0.1:15101) (DC: dc1)
2016/03/17 06:37:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:12 [INFO] raft: Node at 127.0.0.1:15101 [Candidate] entering Candidate state
2016/03/17 06:37:13 [INFO] raft: Node at 127.0.0.1:15105 [Follower] entering Follower state
2016/03/17 06:37:13 [INFO] serf: EventMemberJoin: Node 15104 127.0.0.1
2016/03/17 06:37:13 [INFO] consul: adding LAN server Node 15104 (Addr: 127.0.0.1:15105) (DC: dc1)
2016/03/17 06:37:13 [INFO] serf: EventMemberJoin: Node 15104.dc1 127.0.0.1
2016/03/17 06:37:13 [INFO] consul: adding WAN server Node 15104.dc1 (Addr: 127.0.0.1:15105) (DC: dc1)
2016/03/17 06:37:13 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15102
2016/03/17 06:37:13 [DEBUG] memberlist: TCP connection from=127.0.0.1:39119
2016/03/17 06:37:13 [INFO] serf: EventMemberJoin: Node 15104 127.0.0.1
2016/03/17 06:37:13 [INFO] consul: adding LAN server Node 15104 (Addr: 127.0.0.1:15105) (DC: dc1)
2016/03/17 06:37:13 [INFO] serf: EventMemberJoin: Node 15100 127.0.0.1
2016/03/17 06:37:13 [INFO] consul: adding LAN server Node 15100 (Addr: 127.0.0.1:15101) (DC: dc1)
2016/03/17 06:37:13 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:13 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:13 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:13 [INFO] raft: Node at 127.0.0.1:15101 [Leader] entering Leader state
2016/03/17 06:37:13 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:13 [INFO] consul: New leader elected: Node 15100
2016/03/17 06:37:13 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:37:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:13 [DEBUG] serf: messageJoinType: Node 15104
2016/03/17 06:37:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:13 [INFO] consul: New leader elected: Node 15100
2016/03/17 06:37:13 [DEBUG] serf: messageJoinType: Node 15104
2016/03/17 06:37:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:13 [DEBUG] serf: messageJoinType: Node 15104
2016/03/17 06:37:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:13 [DEBUG] serf: messageJoinType: Node 15104
2016/03/17 06:37:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:13 [DEBUG] serf: messageJoinType: Node 15104
2016/03/17 06:37:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:13 [DEBUG] serf: messageJoinType: Node 15104
2016/03/17 06:37:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:13 [DEBUG] serf: messageJoinType: Node 15104
2016/03/17 06:37:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:13 [DEBUG] serf: messageJoinType: Node 15104
2016/03/17 06:37:14 [DEBUG] raft: Node 127.0.0.1:15101 updated peer set (2): [127.0.0.1:15101]
2016/03/17 06:37:14 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:14 [INFO] consul: member 'Node 15100' joined, marking health alive
2016/03/17 06:37:14 [DEBUG] raft: Node 127.0.0.1:15101 updated peer set (2): [127.0.0.1:15105 127.0.0.1:15101]
2016/03/17 06:37:14 [INFO] raft: Added peer 127.0.0.1:15105, starting replication
2016/03/17 06:37:14 [DEBUG] raft-net: 127.0.0.1:15105 accepted connection from: 127.0.0.1:45700
2016/03/17 06:37:14 [DEBUG] raft-net: 127.0.0.1:15105 accepted connection from: 127.0.0.1:45701
2016/03/17 06:37:14 [DEBUG] raft: Failed to contact 127.0.0.1:15105 in 230.470333ms
2016/03/17 06:37:14 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:37:14 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:37:14 [INFO] consul: cluster leadership lost
2016/03/17 06:37:14 [ERR] consul: failed to reconcile member: {Node 15104 127.0.0.1 15106 map[role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15105] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:37:14 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:37:14 [INFO] raft: Node at 127.0.0.1:15101 [Follower] entering Follower state
2016/03/17 06:37:14 [ERR] consul.acl: Apply failed: node is not the leader
2016/03/17 06:37:14 [INFO] consul: shutting down server
2016/03/17 06:37:14 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:14 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/03/17 06:37:14 [WARN] raft: AppendEntries to 127.0.0.1:15105 rejected, sending older logs (next: 1)
2016/03/17 06:37:14 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:14 [INFO] raft: Node at 127.0.0.1:15101 [Candidate] entering Candidate state
2016/03/17 06:37:14 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:15 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:37:15 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15105: EOF
2016/03/17 06:37:15 [DEBUG] memberlist: Failed UDP ping: Node 15104 (timeout reached)
2016/03/17 06:37:15 [INFO] memberlist: Suspect Node 15104 has failed, no acks received
2016/03/17 06:37:15 [DEBUG] memberlist: Failed UDP ping: Node 15104 (timeout reached)
2016/03/17 06:37:15 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:37:15 [ERR] raft: Failed to heartbeat to 127.0.0.1:15105: EOF
2016/03/17 06:37:15 [INFO] memberlist: Suspect Node 15104 has failed, no acks received
2016/03/17 06:37:15 [INFO] memberlist: Marking Node 15104 as failed, suspect timeout reached
2016/03/17 06:37:15 [INFO] serf: EventMemberFailed: Node 15104 127.0.0.1
2016/03/17 06:37:15 [INFO] consul: removing LAN server Node 15104 (Addr: 127.0.0.1:15105) (DC: dc1)
2016/03/17 06:37:15 [DEBUG] memberlist: Failed UDP ping: Node 15104 (timeout reached)
2016/03/17 06:37:15 [INFO] memberlist: Suspect Node 15104 has failed, no acks received
2016/03/17 06:37:15 [DEBUG] raft: Node 127.0.0.1:15105 updated peer set (2): [127.0.0.1:15101]
2016/03/17 06:37:15 [INFO] consul: shutting down server
2016/03/17 06:37:15 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:15 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:15 [DEBUG] raft: Votes needed: 2
2016/03/17 06:37:15 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:15 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:37:15 [INFO] raft: Node at 127.0.0.1:15101 [Candidate] entering Candidate state
2016/03/17 06:37:15 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15105: dial tcp 127.0.0.1:15105: getsockopt: connection refused
2016/03/17 06:37:16 [DEBUG] raft: Votes needed: 2
2016/03/17 06:37:25 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15105: read tcp 127.0.0.1:45702->127.0.0.1:15105: i/o timeout
2016/03/17 06:37:25 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15105: read tcp 127.0.0.1:45704->127.0.0.1:15105: i/o timeout
2016/03/17 06:37:25 [ERR] raft: Failed to heartbeat to 127.0.0.1:15105: read tcp 127.0.0.1:45706->127.0.0.1:15105: i/o timeout
--- FAIL: TestACL_DownPolicy_Allow (12.76s)
	acl_test.go:504: err: node is not the leader
=== RUN   TestACL_DownPolicy_ExtendCache
2016/03/17 06:37:25 [INFO] raft: Node at 127.0.0.1:15109 [Follower] entering Follower state
2016/03/17 06:37:25 [INFO] serf: EventMemberJoin: Node 15108 127.0.0.1
2016/03/17 06:37:25 [INFO] consul: adding LAN server Node 15108 (Addr: 127.0.0.1:15109) (DC: dc1)
2016/03/17 06:37:25 [INFO] serf: EventMemberJoin: Node 15108.dc1 127.0.0.1
2016/03/17 06:37:25 [INFO] consul: adding WAN server Node 15108.dc1 (Addr: 127.0.0.1:15109) (DC: dc1)
2016/03/17 06:37:25 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:25 [INFO] raft: Node at 127.0.0.1:15109 [Candidate] entering Candidate state
2016/03/17 06:37:26 [INFO] raft: Node at 127.0.0.1:15113 [Follower] entering Follower state
2016/03/17 06:37:26 [INFO] serf: EventMemberJoin: Node 15112 127.0.0.1
2016/03/17 06:37:26 [INFO] consul: adding LAN server Node 15112 (Addr: 127.0.0.1:15113) (DC: dc1)
2016/03/17 06:37:26 [INFO] serf: EventMemberJoin: Node 15112.dc1 127.0.0.1
2016/03/17 06:37:26 [INFO] consul: adding WAN server Node 15112.dc1 (Addr: 127.0.0.1:15113) (DC: dc1)
2016/03/17 06:37:26 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15110
2016/03/17 06:37:26 [DEBUG] memberlist: TCP connection from=127.0.0.1:40001
2016/03/17 06:37:26 [INFO] serf: EventMemberJoin: Node 15112 127.0.0.1
2016/03/17 06:37:26 [INFO] serf: EventMemberJoin: Node 15108 127.0.0.1
2016/03/17 06:37:26 [INFO] consul: adding LAN server Node 15112 (Addr: 127.0.0.1:15113) (DC: dc1)
2016/03/17 06:37:26 [INFO] consul: adding LAN server Node 15108 (Addr: 127.0.0.1:15109) (DC: dc1)
2016/03/17 06:37:26 [DEBUG] serf: messageJoinType: Node 15112
2016/03/17 06:37:26 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:26 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:26 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:26 [INFO] raft: Node at 127.0.0.1:15109 [Leader] entering Leader state
2016/03/17 06:37:26 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:26 [INFO] consul: New leader elected: Node 15108
2016/03/17 06:37:26 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:37:26 [DEBUG] serf: messageJoinType: Node 15112
2016/03/17 06:37:26 [DEBUG] serf: messageJoinType: Node 15112
2016/03/17 06:37:26 [DEBUG] serf: messageJoinType: Node 15112
2016/03/17 06:37:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:26 [INFO] consul: New leader elected: Node 15108
2016/03/17 06:37:26 [DEBUG] serf: messageJoinType: Node 15112
2016/03/17 06:37:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:26 [DEBUG] serf: messageJoinType: Node 15112
2016/03/17 06:37:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:26 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:26 [DEBUG] serf: messageJoinType: Node 15112
2016/03/17 06:37:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:26 [DEBUG] serf: messageJoinType: Node 15112
2016/03/17 06:37:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:26 [DEBUG] raft: Node 127.0.0.1:15109 updated peer set (2): [127.0.0.1:15109]
2016/03/17 06:37:26 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:27 [INFO] consul: member 'Node 15108' joined, marking health alive
2016/03/17 06:37:27 [DEBUG] raft: Node 127.0.0.1:15109 updated peer set (2): [127.0.0.1:15113 127.0.0.1:15109]
2016/03/17 06:37:27 [INFO] raft: Added peer 127.0.0.1:15113, starting replication
2016/03/17 06:37:27 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:47515
2016/03/17 06:37:27 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:47516
2016/03/17 06:37:27 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/03/17 06:37:27 [WARN] raft: AppendEntries to 127.0.0.1:15113 rejected, sending older logs (next: 1)
2016/03/17 06:37:28 [DEBUG] raft: Node 127.0.0.1:15113 updated peer set (2): [127.0.0.1:15109]
2016/03/17 06:37:28 [INFO] raft: pipelining replication to peer 127.0.0.1:15113
2016/03/17 06:37:28 [DEBUG] raft: Node 127.0.0.1:15109 updated peer set (2): [127.0.0.1:15113 127.0.0.1:15109]
2016/03/17 06:37:28 [INFO] consul: member 'Node 15112' joined, marking health alive
2016/03/17 06:37:28 [INFO] consul: shutting down server
2016/03/17 06:37:28 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:28 [DEBUG] raft: Node 127.0.0.1:15113 updated peer set (2): [127.0.0.1:15113 127.0.0.1:15109]
2016/03/17 06:37:28 [DEBUG] memberlist: Failed UDP ping: Node 15108 (timeout reached)
2016/03/17 06:37:28 [INFO] memberlist: Suspect Node 15108 has failed, no acks received
2016/03/17 06:37:28 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:28 [DEBUG] memberlist: Failed UDP ping: Node 15108 (timeout reached)
2016/03/17 06:37:28 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:47520
2016/03/17 06:37:28 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:47521
2016/03/17 06:37:28 [INFO] memberlist: Suspect Node 15108 has failed, no acks received
2016/03/17 06:37:28 [INFO] memberlist: Marking Node 15108 as failed, suspect timeout reached
2016/03/17 06:37:28 [INFO] serf: EventMemberFailed: Node 15108 127.0.0.1
2016/03/17 06:37:28 [INFO] consul: removing LAN server Node 15108 (Addr: 127.0.0.1:15109) (DC: dc1)
2016/03/17 06:37:28 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:47522
2016/03/17 06:37:28 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:47523
2016/03/17 06:37:28 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:47524
2016/03/17 06:37:28 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:47525
2016/03/17 06:37:28 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:47526
2016/03/17 06:37:28 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:47527
2016/03/17 06:37:28 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/03/17 06:37:28 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15113
2016/03/17 06:37:28 [ERR] consul: failed to reconcile member: {Node 15112 127.0.0.1 15114 map[vsn_min:1 vsn_max:3 build: port:15113 role:consul dc:dc1 vsn:2] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:37:28 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:37:28 [ERR] consul.acl: Failed to get policy for 'b7c4a59d-6956-0edb-7fb2-da452bcf8e7a': No cluster leader
2016/03/17 06:37:28 [INFO] consul: shutting down server
2016/03/17 06:37:28 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:28 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:28 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:37:28 [INFO] consul: shutting down server
--- PASS: TestACL_DownPolicy_ExtendCache (3.64s)
=== RUN   TestACL_MultiDC_Found
2016/03/17 06:37:29 [INFO] raft: Node at 127.0.0.1:15117 [Follower] entering Follower state
2016/03/17 06:37:29 [INFO] serf: EventMemberJoin: Node 15116 127.0.0.1
2016/03/17 06:37:29 [INFO] consul: adding LAN server Node 15116 (Addr: 127.0.0.1:15117) (DC: dc1)
2016/03/17 06:37:29 [INFO] serf: EventMemberJoin: Node 15116.dc1 127.0.0.1
2016/03/17 06:37:29 [INFO] consul: adding WAN server Node 15116.dc1 (Addr: 127.0.0.1:15117) (DC: dc1)
2016/03/17 06:37:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:29 [INFO] raft: Node at 127.0.0.1:15117 [Candidate] entering Candidate state
2016/03/17 06:37:30 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:30 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:30 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:30 [INFO] raft: Node at 127.0.0.1:15117 [Leader] entering Leader state
2016/03/17 06:37:30 [INFO] raft: Node at 127.0.0.1:15121 [Follower] entering Follower state
2016/03/17 06:37:30 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:30 [INFO] consul: New leader elected: Node 15116
2016/03/17 06:37:30 [INFO] serf: EventMemberJoin: Node 15120 127.0.0.1
2016/03/17 06:37:30 [INFO] consul: adding LAN server Node 15120 (Addr: 127.0.0.1:15121) (DC: dc2)
2016/03/17 06:37:30 [INFO] serf: EventMemberJoin: Node 15120.dc2 127.0.0.1
2016/03/17 06:37:30 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15119
2016/03/17 06:37:30 [DEBUG] memberlist: TCP connection from=127.0.0.1:57726
2016/03/17 06:37:30 [INFO] consul: adding WAN server Node 15120.dc2 (Addr: 127.0.0.1:15121) (DC: dc2)
2016/03/17 06:37:30 [INFO] serf: EventMemberJoin: Node 15120.dc2 127.0.0.1
2016/03/17 06:37:30 [INFO] consul: adding WAN server Node 15120.dc2 (Addr: 127.0.0.1:15121) (DC: dc2)
2016/03/17 06:37:30 [INFO] serf: EventMemberJoin: Node 15116.dc1 127.0.0.1
2016/03/17 06:37:30 [INFO] consul: adding WAN server Node 15116.dc1 (Addr: 127.0.0.1:15117) (DC: dc1)
2016/03/17 06:37:30 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/17 06:37:30 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:30 [INFO] raft: Node at 127.0.0.1:15121 [Candidate] entering Candidate state
2016/03/17 06:37:30 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/17 06:37:30 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/17 06:37:30 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/17 06:37:30 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/17 06:37:30 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/17 06:37:30 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/17 06:37:30 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/03/17 06:37:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:30 [DEBUG] raft: Node 127.0.0.1:15117 updated peer set (2): [127.0.0.1:15117]
2016/03/17 06:37:30 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:31 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:31 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:31 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:31 [INFO] raft: Node at 127.0.0.1:15121 [Leader] entering Leader state
2016/03/17 06:37:31 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:31 [INFO] consul: New leader elected: Node 15120
2016/03/17 06:37:31 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:31 [INFO] consul: member 'Node 15116' joined, marking health alive
2016/03/17 06:37:31 [DEBUG] raft: Node 127.0.0.1:15121 updated peer set (2): [127.0.0.1:15121]
2016/03/17 06:37:31 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:31 [INFO] consul: member 'Node 15120' joined, marking health alive
2016/03/17 06:37:32 [INFO] consul: shutting down server
2016/03/17 06:37:32 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:32 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:32 [INFO] consul: shutting down server
2016/03/17 06:37:32 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:32 [DEBUG] memberlist: Failed UDP ping: Node 15120.dc2 (timeout reached)
2016/03/17 06:37:32 [INFO] memberlist: Suspect Node 15120.dc2 has failed, no acks received
2016/03/17 06:37:32 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:33 [INFO] memberlist: Marking Node 15120.dc2 as failed, suspect timeout reached
2016/03/17 06:37:33 [INFO] serf: EventMemberFailed: Node 15120.dc2 127.0.0.1
2016/03/17 06:37:33 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACL_MultiDC_Found (4.42s)
=== RUN   TestACL_filterHealthChecks
2016/03/17 06:37:33 [DEBUG] consul: dropping check "check1" from result due to ACLs
--- PASS: TestACL_filterHealthChecks (0.00s)
=== RUN   TestACL_filterServices
2016/03/17 06:37:33 [DEBUG] consul: dropping service "service1" from result due to ACLs
2016/03/17 06:37:33 [DEBUG] consul: dropping service "service2" from result due to ACLs
--- PASS: TestACL_filterServices (0.00s)
=== RUN   TestACL_filterServiceNodes
2016/03/17 06:37:33 [DEBUG] consul: dropping node "node1" from result due to ACLs
--- PASS: TestACL_filterServiceNodes (0.00s)
=== RUN   TestACL_filterNodeServices
2016/03/17 06:37:33 [DEBUG] consul: dropping service "foo" from result due to ACLs
--- PASS: TestACL_filterNodeServices (0.00s)
=== RUN   TestACL_filterCheckServiceNodes
2016/03/17 06:37:33 [DEBUG] consul: dropping node "node1" from result due to ACLs
--- PASS: TestACL_filterCheckServiceNodes (0.00s)
=== RUN   TestACL_filterNodeDump
2016/03/17 06:37:33 [DEBUG] consul: dropping service "foo" from result due to ACLs
2016/03/17 06:37:33 [DEBUG] consul: dropping check "check1" from result due to ACLs
--- PASS: TestACL_filterNodeDump (0.00s)
=== RUN   TestACL_unhandledFilterType
2016/03/17 06:37:33 [INFO] raft: Node at 127.0.0.1:15125 [Follower] entering Follower state
2016/03/17 06:37:33 [INFO] serf: EventMemberJoin: Node 15124 127.0.0.1
2016/03/17 06:37:33 [INFO] consul: adding LAN server Node 15124 (Addr: 127.0.0.1:15125) (DC: dc1)
2016/03/17 06:37:33 [INFO] serf: EventMemberJoin: Node 15124.dc1 127.0.0.1
2016/03/17 06:37:33 [INFO] consul: adding WAN server Node 15124.dc1 (Addr: 127.0.0.1:15125) (DC: dc1)
2016/03/17 06:37:33 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:33 [INFO] raft: Node at 127.0.0.1:15125 [Candidate] entering Candidate state
2016/03/17 06:37:34 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:34 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:34 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:34 [INFO] raft: Node at 127.0.0.1:15125 [Leader] entering Leader state
2016/03/17 06:37:34 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:34 [INFO] consul: New leader elected: Node 15124
2016/03/17 06:37:34 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:34 [DEBUG] raft: Node 127.0.0.1:15125 updated peer set (2): [127.0.0.1:15125]
2016/03/17 06:37:34 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:34 [INFO] consul: member 'Node 15124' joined, marking health alive
2016/03/17 06:37:36 [INFO] consul: shutting down server
2016/03/17 06:37:36 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:36 [WARN] serf: Shutdown without a Leave
--- PASS: TestACL_unhandledFilterType (3.05s)
=== RUN   TestCatalogRegister
2016/03/17 06:37:36 [INFO] raft: Node at 127.0.0.1:15129 [Follower] entering Follower state
2016/03/17 06:37:36 [INFO] serf: EventMemberJoin: Node 15128 127.0.0.1
2016/03/17 06:37:36 [INFO] consul: adding LAN server Node 15128 (Addr: 127.0.0.1:15129) (DC: dc1)
2016/03/17 06:37:36 [INFO] serf: EventMemberJoin: Node 15128.dc1 127.0.0.1
2016/03/17 06:37:36 [INFO] consul: adding WAN server Node 15128.dc1 (Addr: 127.0.0.1:15129) (DC: dc1)
2016/03/17 06:37:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:36 [INFO] raft: Node at 127.0.0.1:15129 [Candidate] entering Candidate state
2016/03/17 06:37:37 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:37 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:37 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:37 [INFO] raft: Node at 127.0.0.1:15129 [Leader] entering Leader state
2016/03/17 06:37:37 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:37 [INFO] consul: New leader elected: Node 15128
2016/03/17 06:37:37 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:37 [DEBUG] raft: Node 127.0.0.1:15129 updated peer set (2): [127.0.0.1:15129]
2016/03/17 06:37:37 [DEBUG] consul: reset tombstone GC to index 3
2016/03/17 06:37:37 [INFO] consul: member 'Node 15128' joined, marking health alive
2016/03/17 06:37:37 [INFO] consul: shutting down server
2016/03/17 06:37:37 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:37 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:37 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalogRegister (1.57s)
=== RUN   TestCatalogRegister_ACLDeny
2016/03/17 06:37:38 [INFO] raft: Node at 127.0.0.1:15133 [Follower] entering Follower state
2016/03/17 06:37:38 [INFO] serf: EventMemberJoin: Node 15132 127.0.0.1
2016/03/17 06:37:38 [INFO] consul: adding LAN server Node 15132 (Addr: 127.0.0.1:15133) (DC: dc1)
2016/03/17 06:37:38 [INFO] serf: EventMemberJoin: Node 15132.dc1 127.0.0.1
2016/03/17 06:37:38 [INFO] consul: adding WAN server Node 15132.dc1 (Addr: 127.0.0.1:15133) (DC: dc1)
2016/03/17 06:37:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:38 [INFO] raft: Node at 127.0.0.1:15133 [Candidate] entering Candidate state
2016/03/17 06:37:38 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:38 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:38 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:38 [INFO] raft: Node at 127.0.0.1:15133 [Leader] entering Leader state
2016/03/17 06:37:38 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:38 [INFO] consul: New leader elected: Node 15132
2016/03/17 06:37:39 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:39 [DEBUG] raft: Node 127.0.0.1:15133 updated peer set (2): [127.0.0.1:15133]
2016/03/17 06:37:39 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:39 [INFO] consul: member 'Node 15132' joined, marking health alive
2016/03/17 06:37:40 [WARN] consul.catalog: Register of service 'db' on 'foo' denied due to ACLs
2016/03/17 06:37:40 [INFO] consul: shutting down server
2016/03/17 06:37:40 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:40 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:41 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:37:41 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogRegister_ACLDeny (3.16s)
=== RUN   TestCatalogRegister_ForwardLeader
2016/03/17 06:37:41 [INFO] raft: Node at 127.0.0.1:15137 [Follower] entering Follower state
2016/03/17 06:37:41 [INFO] serf: EventMemberJoin: Node 15136 127.0.0.1
2016/03/17 06:37:41 [INFO] consul: adding LAN server Node 15136 (Addr: 127.0.0.1:15137) (DC: dc1)
2016/03/17 06:37:41 [INFO] serf: EventMemberJoin: Node 15136.dc1 127.0.0.1
2016/03/17 06:37:41 [INFO] consul: adding WAN server Node 15136.dc1 (Addr: 127.0.0.1:15137) (DC: dc1)
2016/03/17 06:37:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:41 [INFO] raft: Node at 127.0.0.1:15137 [Candidate] entering Candidate state
2016/03/17 06:37:42 [INFO] raft: Node at 127.0.0.1:15141 [Follower] entering Follower state
2016/03/17 06:37:42 [INFO] serf: EventMemberJoin: Node 15140 127.0.0.1
2016/03/17 06:37:42 [INFO] consul: adding LAN server Node 15140 (Addr: 127.0.0.1:15141) (DC: dc1)
2016/03/17 06:37:42 [INFO] serf: EventMemberJoin: Node 15140.dc1 127.0.0.1
2016/03/17 06:37:42 [INFO] consul: adding WAN server Node 15140.dc1 (Addr: 127.0.0.1:15141) (DC: dc1)
2016/03/17 06:37:42 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15138
2016/03/17 06:37:42 [DEBUG] memberlist: TCP connection from=127.0.0.1:41167
2016/03/17 06:37:42 [INFO] serf: EventMemberJoin: Node 15140 127.0.0.1
2016/03/17 06:37:42 [INFO] serf: EventMemberJoin: Node 15136 127.0.0.1
2016/03/17 06:37:42 [INFO] consul: adding LAN server Node 15140 (Addr: 127.0.0.1:15141) (DC: dc1)
2016/03/17 06:37:42 [INFO] consul: adding LAN server Node 15136 (Addr: 127.0.0.1:15137) (DC: dc1)
2016/03/17 06:37:42 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:42 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:42 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:42 [INFO] raft: Node at 127.0.0.1:15137 [Leader] entering Leader state
2016/03/17 06:37:42 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:42 [INFO] consul: New leader elected: Node 15136
2016/03/17 06:37:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:42 [INFO] raft: Node at 127.0.0.1:15141 [Candidate] entering Candidate state
2016/03/17 06:37:42 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:42 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:42 [INFO] consul: New leader elected: Node 15136
2016/03/17 06:37:42 [DEBUG] serf: messageJoinType: Node 15140
2016/03/17 06:37:42 [DEBUG] serf: messageJoinType: Node 15140
2016/03/17 06:37:42 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:42 [DEBUG] serf: messageJoinType: Node 15140
2016/03/17 06:37:42 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:42 [DEBUG] serf: messageJoinType: Node 15140
2016/03/17 06:37:42 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:42 [DEBUG] serf: messageJoinType: Node 15140
2016/03/17 06:37:42 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:42 [DEBUG] serf: messageJoinType: Node 15140
2016/03/17 06:37:42 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:42 [DEBUG] serf: messageJoinType: Node 15140
2016/03/17 06:37:42 [DEBUG] serf: messageJoinType: Node 15140
2016/03/17 06:37:42 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:42 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:42 [DEBUG] raft: Node 127.0.0.1:15137 updated peer set (2): [127.0.0.1:15137]
2016/03/17 06:37:42 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:42 [INFO] consul: member 'Node 15136' joined, marking health alive
2016/03/17 06:37:43 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:37:43 [INFO] consul: member 'Node 15140' joined, marking health alive
2016/03/17 06:37:43 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:43 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:43 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:43 [INFO] raft: Node at 127.0.0.1:15141 [Leader] entering Leader state
2016/03/17 06:37:43 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:43 [INFO] consul: New leader elected: Node 15140
2016/03/17 06:37:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:43 [INFO] consul: New leader elected: Node 15140
2016/03/17 06:37:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:43 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:37:43 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:43 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:37:43 [DEBUG] raft: Node 127.0.0.1:15141 updated peer set (2): [127.0.0.1:15141]
2016/03/17 06:37:43 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:43 [INFO] consul: member 'Node 15140' joined, marking health alive
2016/03/17 06:37:44 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:37:44 [ERR] consul: 'Node 15136' and 'Node 15140' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:37:44 [INFO] consul: member 'Node 15136' joined, marking health alive
2016/03/17 06:37:44 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:37:44 [INFO] consul: shutting down server
2016/03/17 06:37:44 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:44 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:37:45 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:45 [DEBUG] memberlist: Failed UDP ping: Node 15140 (timeout reached)
2016/03/17 06:37:45 [INFO] memberlist: Suspect Node 15140 has failed, no acks received
2016/03/17 06:37:45 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:37:45 [DEBUG] memberlist: Failed UDP ping: Node 15140 (timeout reached)
2016/03/17 06:37:45 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:37:45 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/17 06:37:45 [INFO] consul: shutting down server
2016/03/17 06:37:45 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:45 [INFO] memberlist: Suspect Node 15140 has failed, no acks received
2016/03/17 06:37:45 [INFO] memberlist: Marking Node 15140 as failed, suspect timeout reached
2016/03/17 06:37:45 [INFO] serf: EventMemberFailed: Node 15140 127.0.0.1
2016/03/17 06:37:45 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:45 [INFO] consul: member 'Node 15140' failed, marking health critical
2016/03/17 06:37:45 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/03/17 06:37:45 [ERR] consul: failed to reconcile member: {Node 15140 127.0.0.1 15142 map[role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15141 bootstrap:1] failed 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:37:45 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:37:45 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogRegister_ForwardLeader (4.76s)
=== RUN   TestCatalogRegister_ForwardDC
2016/03/17 06:37:46 [INFO] raft: Node at 127.0.0.1:15145 [Follower] entering Follower state
2016/03/17 06:37:46 [INFO] serf: EventMemberJoin: Node 15144 127.0.0.1
2016/03/17 06:37:46 [INFO] consul: adding LAN server Node 15144 (Addr: 127.0.0.1:15145) (DC: dc1)
2016/03/17 06:37:46 [INFO] serf: EventMemberJoin: Node 15144.dc1 127.0.0.1
2016/03/17 06:37:46 [INFO] consul: adding WAN server Node 15144.dc1 (Addr: 127.0.0.1:15145) (DC: dc1)
2016/03/17 06:37:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:46 [INFO] raft: Node at 127.0.0.1:15145 [Candidate] entering Candidate state
2016/03/17 06:37:47 [INFO] raft: Node at 127.0.0.1:15149 [Follower] entering Follower state
2016/03/17 06:37:47 [INFO] serf: EventMemberJoin: Node 15148 127.0.0.1
2016/03/17 06:37:47 [INFO] consul: adding LAN server Node 15148 (Addr: 127.0.0.1:15149) (DC: dc2)
2016/03/17 06:37:47 [INFO] serf: EventMemberJoin: Node 15148.dc2 127.0.0.1
2016/03/17 06:37:47 [DEBUG] memberlist: TCP connection from=127.0.0.1:52846
2016/03/17 06:37:47 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15147
2016/03/17 06:37:47 [INFO] consul: adding WAN server Node 15148.dc2 (Addr: 127.0.0.1:15149) (DC: dc2)
2016/03/17 06:37:47 [INFO] serf: EventMemberJoin: Node 15148.dc2 127.0.0.1
2016/03/17 06:37:47 [INFO] serf: EventMemberJoin: Node 15144.dc1 127.0.0.1
2016/03/17 06:37:47 [INFO] consul: adding WAN server Node 15148.dc2 (Addr: 127.0.0.1:15149) (DC: dc2)
2016/03/17 06:37:47 [INFO] consul: adding WAN server Node 15144.dc1 (Addr: 127.0.0.1:15145) (DC: dc1)
2016/03/17 06:37:47 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/17 06:37:47 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:47 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:47 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:47 [INFO] raft: Node at 127.0.0.1:15145 [Leader] entering Leader state
2016/03/17 06:37:47 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:47 [INFO] consul: New leader elected: Node 15144
2016/03/17 06:37:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:47 [INFO] raft: Node at 127.0.0.1:15149 [Candidate] entering Candidate state
2016/03/17 06:37:47 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/17 06:37:47 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/17 06:37:47 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/17 06:37:47 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/17 06:37:47 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/17 06:37:47 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/17 06:37:47 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/03/17 06:37:47 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:47 [DEBUG] raft: Node 127.0.0.1:15145 updated peer set (2): [127.0.0.1:15145]
2016/03/17 06:37:47 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:47 [INFO] consul: member 'Node 15144' joined, marking health alive
2016/03/17 06:37:47 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:47 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:47 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:47 [INFO] raft: Node at 127.0.0.1:15149 [Leader] entering Leader state
2016/03/17 06:37:47 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:47 [INFO] consul: New leader elected: Node 15148
2016/03/17 06:37:48 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:48 [DEBUG] raft: Node 127.0.0.1:15149 updated peer set (2): [127.0.0.1:15149]
2016/03/17 06:37:48 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:48 [INFO] consul: member 'Node 15148' joined, marking health alive
2016/03/17 06:37:49 [INFO] consul: shutting down server
2016/03/17 06:37:49 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:49 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:49 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:37:49 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/17 06:37:49 [INFO] consul: shutting down server
2016/03/17 06:37:49 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:49 [DEBUG] memberlist: Failed UDP ping: Node 15148.dc2 (timeout reached)
2016/03/17 06:37:49 [INFO] memberlist: Suspect Node 15148.dc2 has failed, no acks received
2016/03/17 06:37:49 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogRegister_ForwardDC (4.16s)
=== RUN   TestCatalogDeregister
2016/03/17 06:37:50 [INFO] memberlist: Marking Node 15148.dc2 as failed, suspect timeout reached
2016/03/17 06:37:50 [INFO] serf: EventMemberFailed: Node 15148.dc2 127.0.0.1
2016/03/17 06:37:50 [INFO] raft: Node at 127.0.0.1:15153 [Follower] entering Follower state
2016/03/17 06:37:50 [INFO] serf: EventMemberJoin: Node 15152 127.0.0.1
2016/03/17 06:37:50 [INFO] consul: adding LAN server Node 15152 (Addr: 127.0.0.1:15153) (DC: dc1)
2016/03/17 06:37:50 [INFO] serf: EventMemberJoin: Node 15152.dc1 127.0.0.1
2016/03/17 06:37:50 [INFO] consul: adding WAN server Node 15152.dc1 (Addr: 127.0.0.1:15153) (DC: dc1)
2016/03/17 06:37:50 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:50 [INFO] raft: Node at 127.0.0.1:15153 [Candidate] entering Candidate state
2016/03/17 06:37:51 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:51 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:51 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:51 [INFO] raft: Node at 127.0.0.1:15153 [Leader] entering Leader state
2016/03/17 06:37:51 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:51 [INFO] consul: New leader elected: Node 15152
2016/03/17 06:37:51 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:51 [DEBUG] raft: Node 127.0.0.1:15153 updated peer set (2): [127.0.0.1:15153]
2016/03/17 06:37:51 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:51 [INFO] consul: member 'Node 15152' joined, marking health alive
2016/03/17 06:37:51 [INFO] consul: shutting down server
2016/03/17 06:37:51 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:51 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:51 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalogDeregister (2.02s)
=== RUN   TestCatalogListDatacenters
2016/03/17 06:37:52 [INFO] raft: Node at 127.0.0.1:15157 [Follower] entering Follower state
2016/03/17 06:37:52 [INFO] serf: EventMemberJoin: Node 15156 127.0.0.1
2016/03/17 06:37:52 [INFO] consul: adding LAN server Node 15156 (Addr: 127.0.0.1:15157) (DC: dc1)
2016/03/17 06:37:52 [INFO] serf: EventMemberJoin: Node 15156.dc1 127.0.0.1
2016/03/17 06:37:52 [INFO] consul: adding WAN server Node 15156.dc1 (Addr: 127.0.0.1:15157) (DC: dc1)
2016/03/17 06:37:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:52 [INFO] raft: Node at 127.0.0.1:15157 [Candidate] entering Candidate state
2016/03/17 06:37:53 [INFO] serf: EventMemberJoin: Node 15160 127.0.0.1
2016/03/17 06:37:53 [INFO] raft: Node at 127.0.0.1:15161 [Follower] entering Follower state
2016/03/17 06:37:53 [INFO] consul: adding LAN server Node 15160 (Addr: 127.0.0.1:15161) (DC: dc2)
2016/03/17 06:37:53 [INFO] serf: EventMemberJoin: Node 15160.dc2 127.0.0.1
2016/03/17 06:37:53 [INFO] consul: adding WAN server Node 15160.dc2 (Addr: 127.0.0.1:15161) (DC: dc2)
2016/03/17 06:37:53 [DEBUG] memberlist: TCP connection from=127.0.0.1:41026
2016/03/17 06:37:53 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15159
2016/03/17 06:37:53 [INFO] serf: EventMemberJoin: Node 15156.dc1 127.0.0.1
2016/03/17 06:37:53 [INFO] consul: adding WAN server Node 15156.dc1 (Addr: 127.0.0.1:15157) (DC: dc1)
2016/03/17 06:37:53 [INFO] serf: EventMemberJoin: Node 15160.dc2 127.0.0.1
2016/03/17 06:37:53 [INFO] consul: adding WAN server Node 15160.dc2 (Addr: 127.0.0.1:15161) (DC: dc2)
2016/03/17 06:37:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/17 06:37:53 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:53 [INFO] raft: Node at 127.0.0.1:15161 [Candidate] entering Candidate state
2016/03/17 06:37:53 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:53 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:53 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:53 [INFO] raft: Node at 127.0.0.1:15157 [Leader] entering Leader state
2016/03/17 06:37:53 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:53 [INFO] consul: New leader elected: Node 15156
2016/03/17 06:37:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/17 06:37:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/17 06:37:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/17 06:37:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/17 06:37:53 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:53 [DEBUG] raft: Node 127.0.0.1:15157 updated peer set (2): [127.0.0.1:15157]
2016/03/17 06:37:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/17 06:37:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/17 06:37:53 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/03/17 06:37:53 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:53 [INFO] consul: member 'Node 15156' joined, marking health alive
2016/03/17 06:37:54 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:54 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:54 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:54 [INFO] raft: Node at 127.0.0.1:15161 [Leader] entering Leader state
2016/03/17 06:37:54 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:54 [INFO] consul: New leader elected: Node 15160
2016/03/17 06:37:54 [INFO] consul: shutting down server
2016/03/17 06:37:54 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:54 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:54 [DEBUG] raft: Node 127.0.0.1:15161 updated peer set (2): [127.0.0.1:15161]
2016/03/17 06:37:54 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:54 [ERR] memberlist: Failed to send ack: write udp 127.0.0.1:15163->127.0.0.1:15159: use of closed network connection from=127.0.0.1:15159
2016/03/17 06:37:54 [DEBUG] memberlist: Failed UDP ping: Node 15160.dc2 (timeout reached)
2016/03/17 06:37:54 [INFO] memberlist: Suspect Node 15160.dc2 has failed, no acks received
2016/03/17 06:37:54 [DEBUG] memberlist: Failed UDP ping: Node 15160.dc2 (timeout reached)
2016/03/17 06:37:54 [INFO] memberlist: Suspect Node 15160.dc2 has failed, no acks received
2016/03/17 06:37:54 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:37:54 [INFO] consul: shutting down server
2016/03/17 06:37:54 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:54 [INFO] memberlist: Marking Node 15160.dc2 as failed, suspect timeout reached
2016/03/17 06:37:54 [INFO] serf: EventMemberFailed: Node 15160.dc2 127.0.0.1
2016/03/17 06:37:54 [DEBUG] memberlist: Failed UDP ping: Node 15160.dc2 (timeout reached)
2016/03/17 06:37:54 [INFO] memberlist: Suspect Node 15160.dc2 has failed, no acks received
2016/03/17 06:37:54 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogListDatacenters (2.93s)
=== RUN   TestCatalogListDatacenters_DistanceSort
2016/03/17 06:37:55 [INFO] raft: Node at 127.0.0.1:15165 [Follower] entering Follower state
2016/03/17 06:37:55 [INFO] serf: EventMemberJoin: Node 15164 127.0.0.1
2016/03/17 06:37:55 [INFO] consul: adding LAN server Node 15164 (Addr: 127.0.0.1:15165) (DC: dc1)
2016/03/17 06:37:55 [INFO] serf: EventMemberJoin: Node 15164.dc1 127.0.0.1
2016/03/17 06:37:55 [INFO] consul: adding WAN server Node 15164.dc1 (Addr: 127.0.0.1:15165) (DC: dc1)
2016/03/17 06:37:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:55 [INFO] raft: Node at 127.0.0.1:15165 [Candidate] entering Candidate state
2016/03/17 06:37:56 [INFO] raft: Node at 127.0.0.1:15169 [Follower] entering Follower state
2016/03/17 06:37:56 [INFO] serf: EventMemberJoin: Node 15168 127.0.0.1
2016/03/17 06:37:56 [INFO] consul: adding LAN server Node 15168 (Addr: 127.0.0.1:15169) (DC: dc2)
2016/03/17 06:37:56 [INFO] serf: EventMemberJoin: Node 15168.dc2 127.0.0.1
2016/03/17 06:37:56 [INFO] consul: adding WAN server Node 15168.dc2 (Addr: 127.0.0.1:15169) (DC: dc2)
2016/03/17 06:37:56 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:56 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:56 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:56 [INFO] raft: Node at 127.0.0.1:15165 [Leader] entering Leader state
2016/03/17 06:37:56 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:56 [INFO] consul: New leader elected: Node 15164
2016/03/17 06:37:56 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:56 [INFO] raft: Node at 127.0.0.1:15169 [Candidate] entering Candidate state
2016/03/17 06:37:56 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:56 [DEBUG] raft: Node 127.0.0.1:15165 updated peer set (2): [127.0.0.1:15165]
2016/03/17 06:37:56 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:56 [INFO] consul: member 'Node 15164' joined, marking health alive
2016/03/17 06:37:56 [INFO] raft: Node at 127.0.0.1:15173 [Follower] entering Follower state
2016/03/17 06:37:56 [INFO] serf: EventMemberJoin: Node 15172 127.0.0.1
2016/03/17 06:37:56 [INFO] consul: adding LAN server Node 15172 (Addr: 127.0.0.1:15173) (DC: acdc)
2016/03/17 06:37:56 [INFO] serf: EventMemberJoin: Node 15172.acdc 127.0.0.1
2016/03/17 06:37:56 [INFO] consul: adding WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/03/17 06:37:56 [DEBUG] memberlist: TCP connection from=127.0.0.1:52299
2016/03/17 06:37:56 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15167
2016/03/17 06:37:56 [INFO] serf: EventMemberJoin: Node 15168.dc2 127.0.0.1
2016/03/17 06:37:56 [INFO] serf: EventMemberJoin: Node 15164.dc1 127.0.0.1
2016/03/17 06:37:56 [INFO] consul: adding WAN server Node 15168.dc2 (Addr: 127.0.0.1:15169) (DC: dc2)
2016/03/17 06:37:56 [INFO] consul: adding WAN server Node 15164.dc1 (Addr: 127.0.0.1:15165) (DC: dc1)
2016/03/17 06:37:56 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15167
2016/03/17 06:37:56 [DEBUG] memberlist: TCP connection from=127.0.0.1:52300
2016/03/17 06:37:56 [INFO] serf: EventMemberJoin: Node 15172.acdc 127.0.0.1
2016/03/17 06:37:56 [INFO] consul: adding WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/03/17 06:37:56 [INFO] serf: EventMemberJoin: Node 15168.dc2 127.0.0.1
2016/03/17 06:37:56 [INFO] consul: adding WAN server Node 15168.dc2 (Addr: 127.0.0.1:15169) (DC: dc2)
2016/03/17 06:37:56 [INFO] serf: EventMemberJoin: Node 15164.dc1 127.0.0.1
2016/03/17 06:37:56 [INFO] consul: adding WAN server Node 15164.dc1 (Addr: 127.0.0.1:15165) (DC: dc1)
2016/03/17 06:37:56 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:56 [INFO] raft: Node at 127.0.0.1:15173 [Candidate] entering Candidate state
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/17 06:37:56 [INFO] serf: EventMemberJoin: Node 15172.acdc 127.0.0.1
2016/03/17 06:37:56 [INFO] consul: adding WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/17 06:37:56 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/17 06:37:57 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:57 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:57 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:57 [INFO] raft: Node at 127.0.0.1:15169 [Leader] entering Leader state
2016/03/17 06:37:57 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:57 [INFO] consul: New leader elected: Node 15168
2016/03/17 06:37:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/17 06:37:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/17 06:37:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/17 06:37:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/17 06:37:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/17 06:37:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/17 06:37:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/17 06:37:57 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/03/17 06:37:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/17 06:37:57 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/03/17 06:37:57 [INFO] consul: shutting down server
2016/03/17 06:37:57 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:57 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:57 [DEBUG] raft: Node 127.0.0.1:15169 updated peer set (2): [127.0.0.1:15169]
2016/03/17 06:37:57 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:57 [INFO] consul: member 'Node 15168' joined, marking health alive
2016/03/17 06:37:57 [DEBUG] memberlist: Failed UDP ping: Node 15172.acdc (timeout reached)
2016/03/17 06:37:57 [INFO] memberlist: Suspect Node 15172.acdc has failed, no acks received
2016/03/17 06:37:57 [DEBUG] memberlist: Failed UDP ping: Node 15172.acdc (timeout reached)
2016/03/17 06:37:57 [INFO] memberlist: Suspect Node 15172.acdc has failed, no acks received
2016/03/17 06:37:57 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:57 [INFO] consul: shutting down server
2016/03/17 06:37:57 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:57 [DEBUG] memberlist: Failed UDP ping: Node 15172.acdc (timeout reached)
2016/03/17 06:37:57 [INFO] memberlist: Suspect Node 15172.acdc has failed, no acks received
2016/03/17 06:37:57 [INFO] memberlist: Marking Node 15172.acdc as failed, suspect timeout reached
2016/03/17 06:37:57 [INFO] serf: EventMemberFailed: Node 15172.acdc 127.0.0.1
2016/03/17 06:37:57 [INFO] consul: removing WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/03/17 06:37:57 [INFO] memberlist: Marking Node 15172.acdc as failed, suspect timeout reached
2016/03/17 06:37:57 [INFO] serf: EventMemberFailed: Node 15172.acdc 127.0.0.1
2016/03/17 06:37:57 [DEBUG] memberlist: Failed UDP ping: Node 15172.acdc (timeout reached)
2016/03/17 06:37:57 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:57 [INFO] memberlist: Suspect Node 15172.acdc has failed, no acks received
2016/03/17 06:37:57 [DEBUG] memberlist: Failed UDP ping: Node 15168.dc2 (timeout reached)
2016/03/17 06:37:57 [INFO] memberlist: Suspect Node 15168.dc2 has failed, no acks received
2016/03/17 06:37:57 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/17 06:37:57 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/17 06:37:57 [INFO] consul: shutting down server
2016/03/17 06:37:57 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:57 [DEBUG] memberlist: Failed UDP ping: Node 15168.dc2 (timeout reached)
2016/03/17 06:37:58 [INFO] memberlist: Suspect Node 15168.dc2 has failed, no acks received
2016/03/17 06:37:58 [INFO] memberlist: Marking Node 15168.dc2 as failed, suspect timeout reached
2016/03/17 06:37:58 [INFO] serf: EventMemberFailed: Node 15168.dc2 127.0.0.1
2016/03/17 06:37:58 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:58 [DEBUG] memberlist: Failed UDP ping: Node 15168.dc2 (timeout reached)
2016/03/17 06:37:58 [INFO] memberlist: Suspect Node 15168.dc2 has failed, no acks received
--- PASS: TestCatalogListDatacenters_DistanceSort (3.31s)
=== RUN   TestCatalogListNodes
2016/03/17 06:37:58 [INFO] raft: Node at 127.0.0.1:15177 [Follower] entering Follower state
2016/03/17 06:37:58 [INFO] serf: EventMemberJoin: Node 15176 127.0.0.1
2016/03/17 06:37:58 [INFO] consul: adding LAN server Node 15176 (Addr: 127.0.0.1:15177) (DC: dc1)
2016/03/17 06:37:58 [INFO] serf: EventMemberJoin: Node 15176.dc1 127.0.0.1
2016/03/17 06:37:58 [INFO] consul: adding WAN server Node 15176.dc1 (Addr: 127.0.0.1:15177) (DC: dc1)
2016/03/17 06:37:58 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:37:58 [INFO] raft: Node at 127.0.0.1:15177 [Candidate] entering Candidate state
2016/03/17 06:37:59 [DEBUG] raft: Votes needed: 1
2016/03/17 06:37:59 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:37:59 [INFO] raft: Election won. Tally: 1
2016/03/17 06:37:59 [INFO] raft: Node at 127.0.0.1:15177 [Leader] entering Leader state
2016/03/17 06:37:59 [INFO] consul: cluster leadership acquired
2016/03/17 06:37:59 [INFO] consul: New leader elected: Node 15176
2016/03/17 06:37:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:37:59 [DEBUG] raft: Node 127.0.0.1:15177 updated peer set (2): [127.0.0.1:15177]
2016/03/17 06:37:59 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:37:59 [INFO] consul: member 'Node 15176' joined, marking health alive
2016/03/17 06:37:59 [INFO] consul: shutting down server
2016/03/17 06:37:59 [WARN] serf: Shutdown without a Leave
2016/03/17 06:37:59 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:00 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalogListNodes (1.84s)
=== RUN   TestCatalogListNodes_StaleRead
2016/03/17 06:38:00 [INFO] raft: Node at 127.0.0.1:15181 [Follower] entering Follower state
2016/03/17 06:38:01 [INFO] serf: EventMemberJoin: Node 15180 127.0.0.1
2016/03/17 06:38:01 [INFO] consul: adding LAN server Node 15180 (Addr: 127.0.0.1:15181) (DC: dc1)
2016/03/17 06:38:01 [INFO] serf: EventMemberJoin: Node 15180.dc1 127.0.0.1
2016/03/17 06:38:01 [INFO] consul: adding WAN server Node 15180.dc1 (Addr: 127.0.0.1:15181) (DC: dc1)
2016/03/17 06:38:01 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:01 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:01 [INFO] raft: Node at 127.0.0.1:15185 [Follower] entering Follower state
2016/03/17 06:38:01 [INFO] serf: EventMemberJoin: Node 15184 127.0.0.1
2016/03/17 06:38:01 [INFO] consul: adding LAN server Node 15184 (Addr: 127.0.0.1:15185) (DC: dc1)
2016/03/17 06:38:01 [INFO] serf: EventMemberJoin: Node 15184.dc1 127.0.0.1
2016/03/17 06:38:01 [INFO] consul: adding WAN server Node 15184.dc1 (Addr: 127.0.0.1:15185) (DC: dc1)
2016/03/17 06:38:01 [DEBUG] memberlist: TCP connection from=127.0.0.1:55393
2016/03/17 06:38:01 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15182
2016/03/17 06:38:01 [INFO] serf: EventMemberJoin: Node 15180 127.0.0.1
2016/03/17 06:38:01 [INFO] consul: adding LAN server Node 15180 (Addr: 127.0.0.1:15181) (DC: dc1)
2016/03/17 06:38:01 [INFO] serf: EventMemberJoin: Node 15184 127.0.0.1
2016/03/17 06:38:01 [INFO] consul: adding LAN server Node 15184 (Addr: 127.0.0.1:15185) (DC: dc1)
2016/03/17 06:38:01 [DEBUG] raft: Votes needed: 1
2016/03/17 06:38:01 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:01 [INFO] raft: Election won. Tally: 1
2016/03/17 06:38:01 [INFO] raft: Node at 127.0.0.1:15181 [Leader] entering Leader state
2016/03/17 06:38:01 [INFO] consul: cluster leadership acquired
2016/03/17 06:38:01 [INFO] consul: New leader elected: Node 15180
2016/03/17 06:38:01 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:38:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:01 [DEBUG] serf: messageJoinType: Node 15184
2016/03/17 06:38:01 [INFO] consul: New leader elected: Node 15180
2016/03/17 06:38:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:01 [DEBUG] serf: messageJoinType: Node 15184
2016/03/17 06:38:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:01 [DEBUG] serf: messageJoinType: Node 15184
2016/03/17 06:38:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:01 [DEBUG] serf: messageJoinType: Node 15184
2016/03/17 06:38:01 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:38:01 [DEBUG] raft: Node 127.0.0.1:15181 updated peer set (2): [127.0.0.1:15181]
2016/03/17 06:38:02 [DEBUG] serf: messageJoinType: Node 15184
2016/03/17 06:38:02 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:02 [DEBUG] serf: messageJoinType: Node 15184
2016/03/17 06:38:02 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:02 [DEBUG] serf: messageJoinType: Node 15184
2016/03/17 06:38:02 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:02 [DEBUG] serf: messageJoinType: Node 15184
2016/03/17 06:38:02 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:38:02 [INFO] consul: member 'Node 15180' joined, marking health alive
2016/03/17 06:38:02 [DEBUG] raft: Node 127.0.0.1:15181 updated peer set (2): [127.0.0.1:15185 127.0.0.1:15181]
2016/03/17 06:38:02 [INFO] raft: Added peer 127.0.0.1:15185, starting replication
2016/03/17 06:38:02 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:55129
2016/03/17 06:38:02 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:55130
2016/03/17 06:38:02 [DEBUG] raft: Failed to contact 127.0.0.1:15185 in 213.929667ms
2016/03/17 06:38:02 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:38:02 [INFO] raft: Node at 127.0.0.1:15181 [Follower] entering Follower state
2016/03/17 06:38:02 [INFO] consul: cluster leadership lost
2016/03/17 06:38:02 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:38:02 [ERR] consul: failed to reconcile member: {Node 15184 127.0.0.1 15186 map[vsn_min:1 vsn_max:3 build: port:15185 role:consul dc:dc1 vsn:2] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:38:02 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:38:02 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:02 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:02 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/17 06:38:02 [WARN] raft: AppendEntries to 127.0.0.1:15185 rejected, sending older logs (next: 1)
2016/03/17 06:38:02 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:55132
2016/03/17 06:38:03 [DEBUG] raft: Node 127.0.0.1:15185 updated peer set (2): [127.0.0.1:15181]
2016/03/17 06:38:03 [WARN] raft: Rejecting vote from 127.0.0.1:15181 since we have a leader: 127.0.0.1:15181
2016/03/17 06:38:03 [INFO] raft: pipelining replication to peer 127.0.0.1:15185
2016/03/17 06:38:03 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15185
2016/03/17 06:38:03 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:03 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:03 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:03 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:03 [INFO] raft: pipelining replication to peer 127.0.0.1:15185
2016/03/17 06:38:03 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15185
2016/03/17 06:38:03 [WARN] raft: Rejecting vote from 127.0.0.1:15181 since we have a leader: 127.0.0.1:15181
2016/03/17 06:38:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:03 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/17 06:38:03 [DEBUG] raft-net: 127.0.0.1:15181 accepted connection from: 127.0.0.1:52017
2016/03/17 06:38:04 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:04 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:04 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:04 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:04 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:04 [DEBUG] raft: Newer term discovered, fallback to follower
2016/03/17 06:38:04 [INFO] raft: Node at 127.0.0.1:15185 [Follower] entering Follower state
2016/03/17 06:38:05 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:05 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:05 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:05 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:05 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:55136
2016/03/17 06:38:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:05 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/17 06:38:05 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:05 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:05 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/17 06:38:06 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:06 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:06 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:06 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/17 06:38:06 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:06 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:06 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:06 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:07 [INFO] raft: Node at 127.0.0.1:15185 [Follower] entering Follower state
2016/03/17 06:38:07 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:07 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:07 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:07 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:07 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/17 06:38:08 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:08 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/17 06:38:08 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:08 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:08 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:08 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:08 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/17 06:38:08 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:08 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:08 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/17 06:38:09 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:09 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:09 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/17 06:38:09 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:09 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:09 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:09 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/17 06:38:09 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:09 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:09 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:10 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:10 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:10 [INFO] raft: Node at 127.0.0.1:15185 [Follower] entering Follower state
2016/03/17 06:38:10 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:10 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:10 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:10 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:11 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/17 06:38:11 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:11 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:38:11 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:11 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:11 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:11 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:11 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:38:11 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:11 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:11 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/17 06:38:12 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:12 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/17 06:38:12 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:12 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:12 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:12 [INFO] consul: shutting down server
2016/03/17 06:38:12 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:12 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:12 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/17 06:38:12 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:12 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:12 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/03/17 06:38:12 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:12 [DEBUG] memberlist: Failed UDP ping: Node 15184 (timeout reached)
2016/03/17 06:38:13 [INFO] memberlist: Suspect Node 15184 has failed, no acks received
2016/03/17 06:38:13 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:38:13 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15185: EOF
2016/03/17 06:38:13 [DEBUG] memberlist: Failed UDP ping: Node 15184 (timeout reached)
2016/03/17 06:38:13 [INFO] memberlist: Suspect Node 15184 has failed, no acks received
2016/03/17 06:38:13 [INFO] memberlist: Marking Node 15184 as failed, suspect timeout reached
2016/03/17 06:38:13 [INFO] serf: EventMemberFailed: Node 15184 127.0.0.1
2016/03/17 06:38:13 [INFO] consul: removing LAN server Node 15184 (Addr: 127.0.0.1:15185) (DC: dc1)
2016/03/17 06:38:13 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:13 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:13 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:13 [INFO] raft: Duplicate RequestVote for same term: 14
2016/03/17 06:38:13 [INFO] consul: shutting down server
2016/03/17 06:38:13 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:13 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:13 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/03/17 06:38:13 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:13 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:38:13 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15185: EOF
2016/03/17 06:38:14 [DEBUG] raft: Votes needed: 2
--- FAIL: TestCatalogListNodes_StaleRead (14.13s)
	wait.go:41: failed to find leader: No cluster leader
=== RUN   TestCatalogListNodes_ConsistentRead_Fail
2016/03/17 06:38:14 [INFO] raft: Node at 127.0.0.1:15189 [Follower] entering Follower state
2016/03/17 06:38:14 [INFO] serf: EventMemberJoin: Node 15188 127.0.0.1
2016/03/17 06:38:14 [INFO] consul: adding LAN server Node 15188 (Addr: 127.0.0.1:15189) (DC: dc1)
2016/03/17 06:38:14 [INFO] serf: EventMemberJoin: Node 15188.dc1 127.0.0.1
2016/03/17 06:38:14 [INFO] consul: adding WAN server Node 15188.dc1 (Addr: 127.0.0.1:15189) (DC: dc1)
2016/03/17 06:38:14 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:14 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:15 [INFO] raft: Node at 127.0.0.1:15193 [Follower] entering Follower state
2016/03/17 06:38:15 [INFO] serf: EventMemberJoin: Node 15192 127.0.0.1
2016/03/17 06:38:15 [INFO] consul: adding LAN server Node 15192 (Addr: 127.0.0.1:15193) (DC: dc1)
2016/03/17 06:38:15 [INFO] serf: EventMemberJoin: Node 15192.dc1 127.0.0.1
2016/03/17 06:38:15 [INFO] consul: adding WAN server Node 15192.dc1 (Addr: 127.0.0.1:15193) (DC: dc1)
2016/03/17 06:38:15 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15190
2016/03/17 06:38:15 [DEBUG] memberlist: TCP connection from=127.0.0.1:40201
2016/03/17 06:38:15 [INFO] serf: EventMemberJoin: Node 15192 127.0.0.1
2016/03/17 06:38:15 [INFO] serf: EventMemberJoin: Node 15188 127.0.0.1
2016/03/17 06:38:15 [INFO] consul: adding LAN server Node 15192 (Addr: 127.0.0.1:15193) (DC: dc1)
2016/03/17 06:38:15 [INFO] consul: adding LAN server Node 15188 (Addr: 127.0.0.1:15189) (DC: dc1)
2016/03/17 06:38:15 [DEBUG] raft: Votes needed: 1
2016/03/17 06:38:15 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:15 [INFO] raft: Election won. Tally: 1
2016/03/17 06:38:15 [INFO] raft: Node at 127.0.0.1:15189 [Leader] entering Leader state
2016/03/17 06:38:15 [INFO] consul: cluster leadership acquired
2016/03/17 06:38:15 [INFO] consul: New leader elected: Node 15188
2016/03/17 06:38:15 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:38:15 [DEBUG] serf: messageJoinType: Node 15192
2016/03/17 06:38:15 [DEBUG] serf: messageJoinType: Node 15192
2016/03/17 06:38:15 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:15 [INFO] consul: New leader elected: Node 15188
2016/03/17 06:38:15 [DEBUG] serf: messageJoinType: Node 15192
2016/03/17 06:38:15 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:15 [DEBUG] serf: messageJoinType: Node 15192
2016/03/17 06:38:15 [DEBUG] serf: messageJoinType: Node 15192
2016/03/17 06:38:15 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:15 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:15 [DEBUG] serf: messageJoinType: Node 15192
2016/03/17 06:38:15 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:15 [DEBUG] serf: messageJoinType: Node 15192
2016/03/17 06:38:15 [DEBUG] serf: messageJoinType: Node 15192
2016/03/17 06:38:15 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:15 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:15 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:38:15 [DEBUG] raft: Node 127.0.0.1:15189 updated peer set (2): [127.0.0.1:15189]
2016/03/17 06:38:15 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:15 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:38:15 [INFO] consul: member 'Node 15188' joined, marking health alive
2016/03/17 06:38:15 [DEBUG] raft: Node 127.0.0.1:15189 updated peer set (2): [127.0.0.1:15193 127.0.0.1:15189]
2016/03/17 06:38:15 [INFO] raft: Added peer 127.0.0.1:15193, starting replication
2016/03/17 06:38:15 [DEBUG] raft-net: 127.0.0.1:15193 accepted connection from: 127.0.0.1:48753
2016/03/17 06:38:15 [DEBUG] raft-net: 127.0.0.1:15193 accepted connection from: 127.0.0.1:48754
2016/03/17 06:38:16 [DEBUG] raft: Failed to contact 127.0.0.1:15193 in 230.669334ms
2016/03/17 06:38:16 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:38:16 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:38:16 [INFO] consul: cluster leadership lost
2016/03/17 06:38:16 [ERR] consul: failed to reconcile member: {Node 15192 127.0.0.1 15194 map[port:15193 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build:] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:38:16 [INFO] raft: Node at 127.0.0.1:15189 [Follower] entering Follower state
2016/03/17 06:38:16 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:38:16 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/17 06:38:16 [WARN] raft: AppendEntries to 127.0.0.1:15193 rejected, sending older logs (next: 1)
2016/03/17 06:38:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:16 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:16 [DEBUG] raft-net: 127.0.0.1:15193 accepted connection from: 127.0.0.1:48756
2016/03/17 06:38:16 [DEBUG] raft: Node 127.0.0.1:15193 updated peer set (2): [127.0.0.1:15189]
2016/03/17 06:38:16 [WARN] raft: Rejecting vote from 127.0.0.1:15189 since we have a leader: 127.0.0.1:15189
2016/03/17 06:38:16 [INFO] raft: pipelining replication to peer 127.0.0.1:15193
2016/03/17 06:38:16 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15193
2016/03/17 06:38:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:16 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/17 06:38:17 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:17 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:17 [DEBUG] raft-net: 127.0.0.1:15189 accepted connection from: 127.0.0.1:39106
2016/03/17 06:38:17 [INFO] raft: Duplicate RequestVote for same term: 2
2016/03/17 06:38:17 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:17 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:17 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:18 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:18 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:18 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:18 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:18 [INFO] raft: Node at 127.0.0.1:15193 [Follower] entering Follower state
2016/03/17 06:38:18 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:18 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:18 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:18 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:19 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/17 06:38:19 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:19 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/17 06:38:19 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:19 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:19 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:20 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:20 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:20 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/17 06:38:20 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:20 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:20 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:20 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:20 [INFO] raft: Node at 127.0.0.1:15193 [Follower] entering Follower state
2016/03/17 06:38:20 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:20 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/17 06:38:21 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:21 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/17 06:38:21 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:21 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:21 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:21 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:21 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/17 06:38:21 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:21 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:21 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/17 06:38:22 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:22 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/17 06:38:22 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:22 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:22 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:22 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:22 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/17 06:38:22 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:22 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:22 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/17 06:38:22 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:22 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:22 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/17 06:38:23 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:23 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:23 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:23 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/17 06:38:23 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:23 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:23 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/17 06:38:23 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:23 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:23 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/17 06:38:23 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:23 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:23 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:23 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/17 06:38:23 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:24 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:24 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/17 06:38:24 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:24 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:38:24 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:24 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:24 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:24 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:24 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:38:24 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:24 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:24 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/17 06:38:25 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:25 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:38:25 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:25 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:25 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:25 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:25 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:25 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:38:25 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:25 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/17 06:38:26 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:26 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/17 06:38:26 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:26 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:26 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:26 [INFO] consul: shutting down server
2016/03/17 06:38:26 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:26 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:26 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:26 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/17 06:38:26 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:26 [INFO] raft: Node at 127.0.0.1:15193 [Candidate] entering Candidate state
2016/03/17 06:38:26 [DEBUG] memberlist: Failed UDP ping: Node 15192 (timeout reached)
2016/03/17 06:38:26 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:26 [INFO] memberlist: Suspect Node 15192 has failed, no acks received
2016/03/17 06:38:26 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:38:26 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15193: EOF
2016/03/17 06:38:26 [DEBUG] memberlist: Failed UDP ping: Node 15192 (timeout reached)
2016/03/17 06:38:26 [INFO] memberlist: Suspect Node 15192 has failed, no acks received
2016/03/17 06:38:26 [INFO] memberlist: Marking Node 15192 as failed, suspect timeout reached
2016/03/17 06:38:26 [INFO] serf: EventMemberFailed: Node 15192 127.0.0.1
2016/03/17 06:38:26 [INFO] consul: removing LAN server Node 15192 (Addr: 127.0.0.1:15193) (DC: dc1)
2016/03/17 06:38:26 [DEBUG] memberlist: Failed UDP ping: Node 15192 (timeout reached)
2016/03/17 06:38:26 [INFO] memberlist: Suspect Node 15192 has failed, no acks received
2016/03/17 06:38:27 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:27 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:27 [INFO] raft: Duplicate RequestVote for same term: 14
2016/03/17 06:38:27 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:27 [INFO] consul: shutting down server
2016/03/17 06:38:27 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:27 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:38:27 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/03/17 06:38:27 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:27 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:38:27 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15193: EOF
2016/03/17 06:38:27 [DEBUG] raft: Votes needed: 2
--- FAIL: TestCatalogListNodes_ConsistentRead_Fail (13.45s)
	wait.go:41: failed to find leader: No cluster leader
=== RUN   TestCatalogListNodes_ConsistentRead
2016/03/17 06:38:28 [INFO] raft: Node at 127.0.0.1:15197 [Follower] entering Follower state
2016/03/17 06:38:28 [INFO] serf: EventMemberJoin: Node 15196 127.0.0.1
2016/03/17 06:38:28 [INFO] consul: adding LAN server Node 15196 (Addr: 127.0.0.1:15197) (DC: dc1)
2016/03/17 06:38:28 [INFO] serf: EventMemberJoin: Node 15196.dc1 127.0.0.1
2016/03/17 06:38:28 [INFO] consul: adding WAN server Node 15196.dc1 (Addr: 127.0.0.1:15197) (DC: dc1)
2016/03/17 06:38:28 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:28 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/17 06:38:29 [INFO] raft: Node at 127.0.0.1:15201 [Follower] entering Follower state
2016/03/17 06:38:29 [INFO] serf: EventMemberJoin: Node 15200 127.0.0.1
2016/03/17 06:38:29 [INFO] consul: adding LAN server Node 15200 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/03/17 06:38:29 [INFO] serf: EventMemberJoin: Node 15200.dc1 127.0.0.1
2016/03/17 06:38:29 [INFO] consul: adding WAN server Node 15200.dc1 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/03/17 06:38:29 [DEBUG] memberlist: TCP connection from=127.0.0.1:41375
2016/03/17 06:38:29 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15198
2016/03/17 06:38:29 [INFO] serf: EventMemberJoin: Node 15200 127.0.0.1
2016/03/17 06:38:29 [INFO] serf: EventMemberJoin: Node 15196 127.0.0.1
2016/03/17 06:38:29 [INFO] consul: adding LAN server Node 15200 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/03/17 06:38:29 [INFO] consul: adding LAN server Node 15196 (Addr: 127.0.0.1:15197) (DC: dc1)
2016/03/17 06:38:29 [DEBUG] raft: Votes needed: 1
2016/03/17 06:38:29 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:29 [INFO] raft: Election won. Tally: 1
2016/03/17 06:38:29 [INFO] raft: Node at 127.0.0.1:15197 [Leader] entering Leader state
2016/03/17 06:38:29 [INFO] consul: cluster leadership acquired
2016/03/17 06:38:29 [INFO] consul: New leader elected: Node 15196
2016/03/17 06:38:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:29 [INFO] consul: New leader elected: Node 15196
2016/03/17 06:38:29 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:38:29 [DEBUG] serf: messageJoinType: Node 15200
2016/03/17 06:38:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:29 [DEBUG] serf: messageJoinType: Node 15200
2016/03/17 06:38:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:29 [DEBUG] serf: messageJoinType: Node 15200
2016/03/17 06:38:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:29 [DEBUG] serf: messageJoinType: Node 15200
2016/03/17 06:38:29 [DEBUG] serf: messageJoinType: Node 15200
2016/03/17 06:38:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:29 [DEBUG] serf: messageJoinType: Node 15200
2016/03/17 06:38:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:38:29 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:38:29 [DEBUG] raft: Node 127.0.0.1:15197 updated peer set (2): [127.0.0.1:15197]
2016/03/17 06:38:29 [DEBUG] serf: messageJoinType: Node 15200
2016/03/17 06:38:29 [DEBUG] serf: messageJoinType: Node 15200
2016/03/17 06:38:29 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:38:29 [INFO] consul: member 'Node 15196' joined, marking health alive
2016/03/17 06:38:29 [DEBUG] raft: Node 127.0.0.1:15197 updated peer set (2): [127.0.0.1:15201 127.0.0.1:15197]
2016/03/17 06:38:29 [INFO] raft: Added peer 127.0.0.1:15201, starting replication
2016/03/17 06:38:29 [DEBUG] raft-net: 127.0.0.1:15201 accepted connection from: 127.0.0.1:51982
2016/03/17 06:38:29 [DEBUG] raft-net: 127.0.0.1:15201 accepted connection from: 127.0.0.1:51983
2016/03/17 06:38:29 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/17 06:38:29 [WARN] raft: AppendEntries to 127.0.0.1:15201 rejected, sending older logs (next: 1)
2016/03/17 06:38:29 [DEBUG] raft: Failed to contact 127.0.0.1:15201 in 74.453ms
2016/03/17 06:38:29 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:38:29 [INFO] raft: Node at 127.0.0.1:15197 [Follower] entering Follower state
2016/03/17 06:38:29 [INFO] consul: cluster leadership lost
2016/03/17 06:38:29 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:38:29 [INFO] consul: shutting down server
2016/03/17 06:38:29 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:29 [ERR] consul: failed to reconcile member: {Node 15200 127.0.0.1 15202 map[build: port:15201 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:38:29 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:38:29 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/17 06:38:30 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:30 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/03/17 06:38:30 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:30 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:38:30 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15201: EOF
2016/03/17 06:38:30 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:38:30 [ERR] raft: Failed to heartbeat to 127.0.0.1:15201: EOF
2016/03/17 06:38:30 [DEBUG] memberlist: Failed UDP ping: Node 15200 (timeout reached)
2016/03/17 06:38:30 [INFO] memberlist: Suspect Node 15200 has failed, no acks received
2016/03/17 06:38:30 [DEBUG] raft: Node 127.0.0.1:15201 updated peer set (2): [127.0.0.1:15197]
2016/03/17 06:38:30 [INFO] consul: shutting down server
2016/03/17 06:38:30 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:30 [INFO] memberlist: Marking Node 15200 as failed, suspect timeout reached
2016/03/17 06:38:30 [INFO] serf: EventMemberFailed: Node 15200 127.0.0.1
2016/03/17 06:38:30 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:30 [DEBUG] raft: Votes needed: 2
2016/03/17 06:38:40 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15201: read tcp 127.0.0.1:51985->127.0.0.1:15201: i/o timeout
2016/03/17 06:38:40 [ERR] raft: Failed to heartbeat to 127.0.0.1:15201: read tcp 127.0.0.1:51986->127.0.0.1:15201: i/o timeout
2016/03/17 06:38:40 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15201: read tcp 127.0.0.1:51988->127.0.0.1:15201: i/o timeout
--- FAIL: TestCatalogListNodes_ConsistentRead (12.62s)
	catalog_endpoint_test.go:496: err: leadership lost while committing log
=== RUN   TestCatalogListNodes_DistanceSort
2016/03/17 06:38:40 [INFO] raft: Node at 127.0.0.1:15205 [Follower] entering Follower state
2016/03/17 06:38:40 [INFO] serf: EventMemberJoin: Node 15204 127.0.0.1
2016/03/17 06:38:40 [INFO] consul: adding LAN server Node 15204 (Addr: 127.0.0.1:15205) (DC: dc1)
2016/03/17 06:38:40 [INFO] serf: EventMemberJoin: Node 15204.dc1 127.0.0.1
2016/03/17 06:38:40 [INFO] consul: adding WAN server Node 15204.dc1 (Addr: 127.0.0.1:15205) (DC: dc1)
2016/03/17 06:38:40 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:40 [INFO] raft: Node at 127.0.0.1:15205 [Candidate] entering Candidate state
2016/03/17 06:38:41 [DEBUG] raft: Votes needed: 1
2016/03/17 06:38:41 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:41 [INFO] raft: Election won. Tally: 1
2016/03/17 06:38:41 [INFO] raft: Node at 127.0.0.1:15205 [Leader] entering Leader state
2016/03/17 06:38:41 [INFO] consul: cluster leadership acquired
2016/03/17 06:38:41 [INFO] consul: New leader elected: Node 15204
2016/03/17 06:38:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:38:41 [DEBUG] raft: Node 127.0.0.1:15205 updated peer set (2): [127.0.0.1:15205]
2016/03/17 06:38:41 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:38:41 [INFO] consul: member 'Node 15204' joined, marking health alive
2016/03/17 06:38:41 [INFO] consul: shutting down server
2016/03/17 06:38:41 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:41 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogListNodes_DistanceSort (1.66s)
=== RUN   TestCatalogListServices
2016/03/17 06:38:42 [INFO] raft: Node at 127.0.0.1:15209 [Follower] entering Follower state
2016/03/17 06:38:42 [INFO] serf: EventMemberJoin: Node 15208 127.0.0.1
2016/03/17 06:38:42 [INFO] consul: adding LAN server Node 15208 (Addr: 127.0.0.1:15209) (DC: dc1)
2016/03/17 06:38:42 [INFO] serf: EventMemberJoin: Node 15208.dc1 127.0.0.1
2016/03/17 06:38:42 [INFO] consul: adding WAN server Node 15208.dc1 (Addr: 127.0.0.1:15209) (DC: dc1)
2016/03/17 06:38:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:42 [INFO] raft: Node at 127.0.0.1:15209 [Candidate] entering Candidate state
2016/03/17 06:38:42 [DEBUG] raft: Votes needed: 1
2016/03/17 06:38:42 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:42 [INFO] raft: Election won. Tally: 1
2016/03/17 06:38:42 [INFO] raft: Node at 127.0.0.1:15209 [Leader] entering Leader state
2016/03/17 06:38:42 [INFO] consul: cluster leadership acquired
2016/03/17 06:38:42 [INFO] consul: New leader elected: Node 15208
2016/03/17 06:38:43 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:38:43 [DEBUG] raft: Node 127.0.0.1:15209 updated peer set (2): [127.0.0.1:15209]
2016/03/17 06:38:43 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:38:43 [INFO] consul: member 'Node 15208' joined, marking health alive
2016/03/17 06:38:43 [INFO] consul: shutting down server
2016/03/17 06:38:43 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:43 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:43 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:38:43 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogListServices (1.90s)
=== RUN   TestCatalogListServices_Blocking
2016/03/17 06:38:44 [INFO] raft: Node at 127.0.0.1:15213 [Follower] entering Follower state
2016/03/17 06:38:44 [INFO] serf: EventMemberJoin: Node 15212 127.0.0.1
2016/03/17 06:38:44 [INFO] consul: adding LAN server Node 15212 (Addr: 127.0.0.1:15213) (DC: dc1)
2016/03/17 06:38:44 [INFO] serf: EventMemberJoin: Node 15212.dc1 127.0.0.1
2016/03/17 06:38:44 [INFO] consul: adding WAN server Node 15212.dc1 (Addr: 127.0.0.1:15213) (DC: dc1)
2016/03/17 06:38:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:44 [INFO] raft: Node at 127.0.0.1:15213 [Candidate] entering Candidate state
2016/03/17 06:38:44 [DEBUG] raft: Votes needed: 1
2016/03/17 06:38:44 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:44 [INFO] raft: Election won. Tally: 1
2016/03/17 06:38:44 [INFO] raft: Node at 127.0.0.1:15213 [Leader] entering Leader state
2016/03/17 06:38:44 [INFO] consul: cluster leadership acquired
2016/03/17 06:38:44 [INFO] consul: New leader elected: Node 15212
2016/03/17 06:38:44 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:38:45 [DEBUG] raft: Node 127.0.0.1:15213 updated peer set (2): [127.0.0.1:15213]
2016/03/17 06:38:45 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:38:45 [INFO] consul: member 'Node 15212' joined, marking health alive
2016/03/17 06:38:45 [INFO] consul: shutting down server
2016/03/17 06:38:45 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:45 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:45 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalogListServices_Blocking (1.82s)
=== RUN   TestCatalogListServices_Timeout
2016/03/17 06:38:46 [INFO] raft: Node at 127.0.0.1:15217 [Follower] entering Follower state
2016/03/17 06:38:46 [INFO] serf: EventMemberJoin: Node 15216 127.0.0.1
2016/03/17 06:38:46 [INFO] consul: adding LAN server Node 15216 (Addr: 127.0.0.1:15217) (DC: dc1)
2016/03/17 06:38:46 [INFO] serf: EventMemberJoin: Node 15216.dc1 127.0.0.1
2016/03/17 06:38:46 [INFO] consul: adding WAN server Node 15216.dc1 (Addr: 127.0.0.1:15217) (DC: dc1)
2016/03/17 06:38:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:46 [INFO] raft: Node at 127.0.0.1:15217 [Candidate] entering Candidate state
2016/03/17 06:38:46 [DEBUG] raft: Votes needed: 1
2016/03/17 06:38:46 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:46 [INFO] raft: Election won. Tally: 1
2016/03/17 06:38:46 [INFO] raft: Node at 127.0.0.1:15217 [Leader] entering Leader state
2016/03/17 06:38:46 [INFO] consul: cluster leadership acquired
2016/03/17 06:38:46 [INFO] consul: New leader elected: Node 15216
2016/03/17 06:38:46 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:38:46 [DEBUG] raft: Node 127.0.0.1:15217 updated peer set (2): [127.0.0.1:15217]
2016/03/17 06:38:46 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:38:46 [INFO] consul: member 'Node 15216' joined, marking health alive
2016/03/17 06:38:47 [INFO] consul: shutting down server
2016/03/17 06:38:47 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:47 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogListServices_Timeout (1.68s)
=== RUN   TestCatalogListServices_Stale
2016/03/17 06:38:47 [INFO] raft: Node at 127.0.0.1:15221 [Follower] entering Follower state
2016/03/17 06:38:47 [INFO] serf: EventMemberJoin: Node 15220 127.0.0.1
2016/03/17 06:38:47 [INFO] consul: adding LAN server Node 15220 (Addr: 127.0.0.1:15221) (DC: dc1)
2016/03/17 06:38:47 [INFO] serf: EventMemberJoin: Node 15220.dc1 127.0.0.1
2016/03/17 06:38:47 [INFO] consul: adding WAN server Node 15220.dc1 (Addr: 127.0.0.1:15221) (DC: dc1)
2016/03/17 06:38:47 [INFO] consul: shutting down server
2016/03/17 06:38:47 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:47 [INFO] raft: Node at 127.0.0.1:15221 [Candidate] entering Candidate state
2016/03/17 06:38:47 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:48 [DEBUG] raft: Votes needed: 1
--- PASS: TestCatalogListServices_Stale (1.02s)
=== RUN   TestCatalogListServiceNodes
2016/03/17 06:38:48 [INFO] raft: Node at 127.0.0.1:15225 [Follower] entering Follower state
2016/03/17 06:38:48 [INFO] serf: EventMemberJoin: Node 15224 127.0.0.1
2016/03/17 06:38:48 [INFO] consul: adding LAN server Node 15224 (Addr: 127.0.0.1:15225) (DC: dc1)
2016/03/17 06:38:48 [INFO] serf: EventMemberJoin: Node 15224.dc1 127.0.0.1
2016/03/17 06:38:48 [INFO] consul: adding WAN server Node 15224.dc1 (Addr: 127.0.0.1:15225) (DC: dc1)
2016/03/17 06:38:48 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:48 [INFO] raft: Node at 127.0.0.1:15225 [Candidate] entering Candidate state
2016/03/17 06:38:49 [DEBUG] raft: Votes needed: 1
2016/03/17 06:38:49 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:49 [INFO] raft: Election won. Tally: 1
2016/03/17 06:38:49 [INFO] raft: Node at 127.0.0.1:15225 [Leader] entering Leader state
2016/03/17 06:38:49 [INFO] consul: cluster leadership acquired
2016/03/17 06:38:49 [INFO] consul: New leader elected: Node 15224
2016/03/17 06:38:49 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:38:49 [DEBUG] raft: Node 127.0.0.1:15225 updated peer set (2): [127.0.0.1:15225]
2016/03/17 06:38:49 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:38:49 [INFO] consul: member 'Node 15224' joined, marking health alive
2016/03/17 06:38:49 [INFO] consul: shutting down server
2016/03/17 06:38:49 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:49 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:50 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:38:50 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogListServiceNodes (1.78s)
=== RUN   TestCatalogListServiceNodes_DistanceSort
2016/03/17 06:38:50 [INFO] raft: Node at 127.0.0.1:15229 [Follower] entering Follower state
2016/03/17 06:38:50 [INFO] serf: EventMemberJoin: Node 15228 127.0.0.1
2016/03/17 06:38:50 [INFO] consul: adding LAN server Node 15228 (Addr: 127.0.0.1:15229) (DC: dc1)
2016/03/17 06:38:50 [INFO] serf: EventMemberJoin: Node 15228.dc1 127.0.0.1
2016/03/17 06:38:50 [INFO] consul: adding WAN server Node 15228.dc1 (Addr: 127.0.0.1:15229) (DC: dc1)
2016/03/17 06:38:50 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:50 [INFO] raft: Node at 127.0.0.1:15229 [Candidate] entering Candidate state
2016/03/17 06:38:51 [DEBUG] raft: Votes needed: 1
2016/03/17 06:38:51 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:51 [INFO] raft: Election won. Tally: 1
2016/03/17 06:38:51 [INFO] raft: Node at 127.0.0.1:15229 [Leader] entering Leader state
2016/03/17 06:38:51 [INFO] consul: cluster leadership acquired
2016/03/17 06:38:51 [INFO] consul: New leader elected: Node 15228
2016/03/17 06:38:51 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:38:51 [DEBUG] raft: Node 127.0.0.1:15229 updated peer set (2): [127.0.0.1:15229]
2016/03/17 06:38:51 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:38:51 [INFO] consul: member 'Node 15228' joined, marking health alive
2016/03/17 06:38:51 [INFO] consul: shutting down server
2016/03/17 06:38:51 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:51 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:52 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalogListServiceNodes_DistanceSort (1.97s)
=== RUN   TestCatalogNodeServices
2016/03/17 06:38:52 [INFO] raft: Node at 127.0.0.1:15233 [Follower] entering Follower state
2016/03/17 06:38:52 [INFO] serf: EventMemberJoin: Node 15232 127.0.0.1
2016/03/17 06:38:52 [INFO] consul: adding LAN server Node 15232 (Addr: 127.0.0.1:15233) (DC: dc1)
2016/03/17 06:38:52 [INFO] serf: EventMemberJoin: Node 15232.dc1 127.0.0.1
2016/03/17 06:38:52 [INFO] consul: adding WAN server Node 15232.dc1 (Addr: 127.0.0.1:15233) (DC: dc1)
2016/03/17 06:38:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:52 [INFO] raft: Node at 127.0.0.1:15233 [Candidate] entering Candidate state
2016/03/17 06:38:53 [DEBUG] raft: Votes needed: 1
2016/03/17 06:38:53 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:53 [INFO] raft: Election won. Tally: 1
2016/03/17 06:38:53 [INFO] raft: Node at 127.0.0.1:15233 [Leader] entering Leader state
2016/03/17 06:38:53 [INFO] consul: cluster leadership acquired
2016/03/17 06:38:53 [INFO] consul: New leader elected: Node 15232
2016/03/17 06:38:53 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:38:53 [DEBUG] raft: Node 127.0.0.1:15233 updated peer set (2): [127.0.0.1:15233]
2016/03/17 06:38:53 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:38:53 [INFO] consul: member 'Node 15232' joined, marking health alive
2016/03/17 06:38:53 [INFO] consul: shutting down server
2016/03/17 06:38:53 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:53 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:53 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalogNodeServices (1.87s)
=== RUN   TestCatalogRegister_FailedCase1
2016/03/17 06:38:54 [INFO] raft: Node at 127.0.0.1:15237 [Follower] entering Follower state
2016/03/17 06:38:54 [INFO] serf: EventMemberJoin: Node 15236 127.0.0.1
2016/03/17 06:38:54 [INFO] consul: adding LAN server Node 15236 (Addr: 127.0.0.1:15237) (DC: dc1)
2016/03/17 06:38:54 [INFO] serf: EventMemberJoin: Node 15236.dc1 127.0.0.1
2016/03/17 06:38:54 [INFO] consul: adding WAN server Node 15236.dc1 (Addr: 127.0.0.1:15237) (DC: dc1)
2016/03/17 06:38:54 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:54 [INFO] raft: Node at 127.0.0.1:15237 [Candidate] entering Candidate state
2016/03/17 06:38:54 [DEBUG] raft: Votes needed: 1
2016/03/17 06:38:54 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:54 [INFO] raft: Election won. Tally: 1
2016/03/17 06:38:54 [INFO] raft: Node at 127.0.0.1:15237 [Leader] entering Leader state
2016/03/17 06:38:54 [INFO] consul: cluster leadership acquired
2016/03/17 06:38:54 [INFO] consul: New leader elected: Node 15236
2016/03/17 06:38:55 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:38:55 [DEBUG] raft: Node 127.0.0.1:15237 updated peer set (2): [127.0.0.1:15237]
2016/03/17 06:38:55 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:38:55 [INFO] consul: member 'Node 15236' joined, marking health alive
2016/03/17 06:38:55 [INFO] consul: shutting down server
2016/03/17 06:38:55 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:56 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogRegister_FailedCase1 (2.31s)
=== RUN   TestCatalog_ListServices_FilterACL
2016/03/17 06:38:56 [INFO] raft: Node at 127.0.0.1:15241 [Follower] entering Follower state
2016/03/17 06:38:56 [INFO] serf: EventMemberJoin: Node 15240 127.0.0.1
2016/03/17 06:38:56 [INFO] consul: adding LAN server Node 15240 (Addr: 127.0.0.1:15241) (DC: dc1)
2016/03/17 06:38:56 [INFO] serf: EventMemberJoin: Node 15240.dc1 127.0.0.1
2016/03/17 06:38:56 [INFO] consul: adding WAN server Node 15240.dc1 (Addr: 127.0.0.1:15241) (DC: dc1)
2016/03/17 06:38:56 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:38:56 [INFO] raft: Node at 127.0.0.1:15241 [Candidate] entering Candidate state
2016/03/17 06:38:57 [DEBUG] raft: Votes needed: 1
2016/03/17 06:38:57 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:38:57 [INFO] raft: Election won. Tally: 1
2016/03/17 06:38:57 [INFO] raft: Node at 127.0.0.1:15241 [Leader] entering Leader state
2016/03/17 06:38:57 [INFO] consul: cluster leadership acquired
2016/03/17 06:38:57 [INFO] consul: New leader elected: Node 15240
2016/03/17 06:38:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:38:57 [DEBUG] raft: Node 127.0.0.1:15241 updated peer set (2): [127.0.0.1:15241]
2016/03/17 06:38:57 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:38:57 [INFO] consul: member 'Node 15240' joined, marking health alive
2016/03/17 06:38:59 [DEBUG] consul: dropping service "bar" from result due to ACLs
2016/03/17 06:38:59 [INFO] consul: shutting down server
2016/03/17 06:38:59 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:59 [WARN] serf: Shutdown without a Leave
2016/03/17 06:38:59 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalog_ListServices_FilterACL (3.69s)
=== RUN   TestCatalog_ServiceNodes_FilterACL
2016/03/17 06:39:00 [INFO] raft: Node at 127.0.0.1:15245 [Follower] entering Follower state
2016/03/17 06:39:00 [INFO] serf: EventMemberJoin: Node 15244 127.0.0.1
2016/03/17 06:39:00 [INFO] consul: adding LAN server Node 15244 (Addr: 127.0.0.1:15245) (DC: dc1)
2016/03/17 06:39:00 [INFO] serf: EventMemberJoin: Node 15244.dc1 127.0.0.1
2016/03/17 06:39:00 [INFO] consul: adding WAN server Node 15244.dc1 (Addr: 127.0.0.1:15245) (DC: dc1)
2016/03/17 06:39:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:00 [INFO] raft: Node at 127.0.0.1:15245 [Candidate] entering Candidate state
2016/03/17 06:39:00 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:00 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:00 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:00 [INFO] raft: Node at 127.0.0.1:15245 [Leader] entering Leader state
2016/03/17 06:39:00 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:00 [INFO] consul: New leader elected: Node 15244
2016/03/17 06:39:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:01 [DEBUG] raft: Node 127.0.0.1:15245 updated peer set (2): [127.0.0.1:15245]
2016/03/17 06:39:01 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:01 [INFO] consul: member 'Node 15244' joined, marking health alive
2016/03/17 06:39:03 [DEBUG] consul: dropping node "Node 15244" from result due to ACLs
2016/03/17 06:39:03 [INFO] consul: shutting down server
2016/03/17 06:39:03 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:03 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalog_ServiceNodes_FilterACL (3.75s)
=== RUN   TestCatalog_NodeServices_FilterACL
2016/03/17 06:39:04 [INFO] raft: Node at 127.0.0.1:15249 [Follower] entering Follower state
2016/03/17 06:39:04 [INFO] serf: EventMemberJoin: Node 15248 127.0.0.1
2016/03/17 06:39:04 [INFO] consul: adding LAN server Node 15248 (Addr: 127.0.0.1:15249) (DC: dc1)
2016/03/17 06:39:04 [INFO] serf: EventMemberJoin: Node 15248.dc1 127.0.0.1
2016/03/17 06:39:04 [INFO] consul: adding WAN server Node 15248.dc1 (Addr: 127.0.0.1:15249) (DC: dc1)
2016/03/17 06:39:04 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:04 [INFO] raft: Node at 127.0.0.1:15249 [Candidate] entering Candidate state
2016/03/17 06:39:04 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:04 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:04 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:04 [INFO] raft: Node at 127.0.0.1:15249 [Leader] entering Leader state
2016/03/17 06:39:04 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:04 [INFO] consul: New leader elected: Node 15248
2016/03/17 06:39:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:04 [DEBUG] raft: Node 127.0.0.1:15249 updated peer set (2): [127.0.0.1:15249]
2016/03/17 06:39:04 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:05 [INFO] consul: member 'Node 15248' joined, marking health alive
2016/03/17 06:39:06 [DEBUG] consul: dropping service "bar" from result due to ACLs
2016/03/17 06:39:06 [INFO] consul: shutting down server
2016/03/17 06:39:06 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:06 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:06 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:39:06 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalog_NodeServices_FilterACL (3.05s)
=== RUN   TestClient_StartStop
2016/03/17 06:39:06 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:39:06 [INFO] consul: shutting down client
2016/03/17 06:39:06 [WARN] serf: Shutdown without a Leave
--- PASS: TestClient_StartStop (0.06s)
=== RUN   TestClient_JoinLAN
2016/03/17 06:39:07 [INFO] raft: Node at 127.0.0.1:15255 [Follower] entering Follower state
2016/03/17 06:39:07 [INFO] serf: EventMemberJoin: Node 15254 127.0.0.1
2016/03/17 06:39:07 [INFO] consul: adding LAN server Node 15254 (Addr: 127.0.0.1:15255) (DC: dc1)
2016/03/17 06:39:07 [INFO] serf: EventMemberJoin: Node 15254.dc1 127.0.0.1
2016/03/17 06:39:07 [INFO] consul: adding WAN server Node 15254.dc1 (Addr: 127.0.0.1:15255) (DC: dc1)
2016/03/17 06:39:07 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:39:07 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15256
2016/03/17 06:39:07 [DEBUG] memberlist: TCP connection from=127.0.0.1:53271
2016/03/17 06:39:07 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:39:07 [INFO] serf: EventMemberJoin: Node 15254 127.0.0.1
2016/03/17 06:39:07 [INFO] consul: adding server Node 15254 (Addr: 127.0.0.1:15255) (DC: dc1)
2016/03/17 06:39:07 [INFO] consul: shutting down client
2016/03/17 06:39:07 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:07 [INFO] raft: Node at 127.0.0.1:15255 [Candidate] entering Candidate state
2016/03/17 06:39:07 [INFO] consul: shutting down server
2016/03/17 06:39:07 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:07 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/17 06:39:07 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/17 06:39:07 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:07 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/03/17 06:39:07 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/03/17 06:39:07 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_JoinLAN (1.08s)
=== RUN   TestClient_JoinLAN_Invalid
2016/03/17 06:39:08 [INFO] raft: Node at 127.0.0.1:15261 [Follower] entering Follower state
2016/03/17 06:39:08 [INFO] serf: EventMemberJoin: Node 15260 127.0.0.1
2016/03/17 06:39:08 [INFO] consul: adding LAN server Node 15260 (Addr: 127.0.0.1:15261) (DC: dc1)
2016/03/17 06:39:08 [INFO] serf: EventMemberJoin: Node 15260.dc1 127.0.0.1
2016/03/17 06:39:08 [INFO] consul: adding WAN server Node 15260.dc1 (Addr: 127.0.0.1:15261) (DC: dc1)
2016/03/17 06:39:08 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:39:08 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15262
2016/03/17 06:39:08 [DEBUG] memberlist: TCP connection from=127.0.0.1:46876
2016/03/17 06:39:08 [ERR] memberlist: Failed push/pull merge: Member 'testco.internal' part of wrong datacenter 'other' from=127.0.0.1:46876
2016/03/17 06:39:08 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:08 [INFO] raft: Node at 127.0.0.1:15261 [Candidate] entering Candidate state
2016/03/17 06:39:08 [INFO] consul: shutting down client
2016/03/17 06:39:08 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:08 [INFO] consul: shutting down server
2016/03/17 06:39:08 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:08 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:08 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_JoinLAN_Invalid (1.12s)
=== RUN   TestClient_JoinWAN_Invalid
2016/03/17 06:39:09 [INFO] raft: Node at 127.0.0.1:15267 [Follower] entering Follower state
2016/03/17 06:39:09 [INFO] serf: EventMemberJoin: Node 15266 127.0.0.1
2016/03/17 06:39:09 [INFO] consul: adding LAN server Node 15266 (Addr: 127.0.0.1:15267) (DC: dc1)
2016/03/17 06:39:09 [INFO] serf: EventMemberJoin: Node 15266.dc1 127.0.0.1
2016/03/17 06:39:09 [INFO] consul: adding WAN server Node 15266.dc1 (Addr: 127.0.0.1:15267) (DC: dc1)
2016/03/17 06:39:09 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:39:09 [DEBUG] memberlist: TCP connection from=127.0.0.1:33879
2016/03/17 06:39:09 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15269
2016/03/17 06:39:09 [ERR] memberlist: Failed push/pull merge: Member 'testco.internal' is not a server from=127.0.0.1:33879
2016/03/17 06:39:09 [INFO] consul: shutting down client
2016/03/17 06:39:09 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:09 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:09 [INFO] raft: Node at 127.0.0.1:15267 [Candidate] entering Candidate state
2016/03/17 06:39:09 [INFO] consul: shutting down server
2016/03/17 06:39:09 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:09 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:10 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_JoinWAN_Invalid (1.13s)
=== RUN   TestClient_RPC
2016/03/17 06:39:10 [INFO] raft: Node at 127.0.0.1:15273 [Follower] entering Follower state
2016/03/17 06:39:10 [INFO] serf: EventMemberJoin: Node 15272 127.0.0.1
2016/03/17 06:39:10 [INFO] consul: adding LAN server Node 15272 (Addr: 127.0.0.1:15273) (DC: dc1)
2016/03/17 06:39:10 [INFO] serf: EventMemberJoin: Node 15272.dc1 127.0.0.1
2016/03/17 06:39:10 [INFO] consul: adding WAN server Node 15272.dc1 (Addr: 127.0.0.1:15273) (DC: dc1)
2016/03/17 06:39:10 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:39:10 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15274
2016/03/17 06:39:10 [DEBUG] memberlist: TCP connection from=127.0.0.1:41537
2016/03/17 06:39:10 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:39:10 [INFO] serf: EventMemberJoin: Node 15272 127.0.0.1
2016/03/17 06:39:10 [INFO] consul: adding server Node 15272 (Addr: 127.0.0.1:15273) (DC: dc1)
2016/03/17 06:39:10 [INFO] consul: shutting down client
2016/03/17 06:39:10 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:10 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:10 [INFO] raft: Node at 127.0.0.1:15273 [Candidate] entering Candidate state
2016/03/17 06:39:10 [INFO] consul: shutting down server
2016/03/17 06:39:10 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:10 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:11 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_RPC (1.11s)
=== RUN   TestClient_RPC_Pool
2016/03/17 06:39:11 [INFO] raft: Node at 127.0.0.1:15279 [Follower] entering Follower state
2016/03/17 06:39:11 [INFO] serf: EventMemberJoin: Node 15278 127.0.0.1
2016/03/17 06:39:11 [INFO] consul: adding LAN server Node 15278 (Addr: 127.0.0.1:15279) (DC: dc1)
2016/03/17 06:39:11 [INFO] serf: EventMemberJoin: Node 15278.dc1 127.0.0.1
2016/03/17 06:39:11 [INFO] consul: adding WAN server Node 15278.dc1 (Addr: 127.0.0.1:15279) (DC: dc1)
2016/03/17 06:39:11 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:39:11 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15280
2016/03/17 06:39:11 [DEBUG] memberlist: TCP connection from=127.0.0.1:53166
2016/03/17 06:39:11 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:39:11 [INFO] serf: EventMemberJoin: Node 15278 127.0.0.1
2016/03/17 06:39:11 [INFO] consul: adding server Node 15278 (Addr: 127.0.0.1:15279) (DC: dc1)
2016/03/17 06:39:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:11 [INFO] raft: Node at 127.0.0.1:15279 [Candidate] entering Candidate state
2016/03/17 06:39:11 [INFO] consul: shutting down client
2016/03/17 06:39:11 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:11 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:39:11 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:39:11 [INFO] consul: shutting down server
2016/03/17 06:39:11 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:12 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:12 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_RPC_Pool (1.17s)
=== RUN   TestClient_RPC_TLS
2016/03/17 06:39:12 [INFO] raft: Node at 127.0.0.1:15284 [Follower] entering Follower state
2016/03/17 06:39:12 [INFO] serf: EventMemberJoin: a.testco.internal 127.0.0.1
2016/03/17 06:39:12 [INFO] consul: adding LAN server a.testco.internal (Addr: 127.0.0.1:15284) (DC: dc1)
2016/03/17 06:39:12 [INFO] serf: EventMemberJoin: a.testco.internal.dc1 127.0.0.1
2016/03/17 06:39:12 [INFO] consul: adding WAN server a.testco.internal.dc1 (Addr: 127.0.0.1:15284) (DC: dc1)
2016/03/17 06:39:12 [INFO] serf: EventMemberJoin: b.testco.internal 127.0.0.1
2016/03/17 06:39:12 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15285
2016/03/17 06:39:12 [DEBUG] memberlist: TCP connection from=127.0.0.1:36470
2016/03/17 06:39:12 [INFO] serf: EventMemberJoin: b.testco.internal 127.0.0.1
2016/03/17 06:39:12 [INFO] serf: EventMemberJoin: a.testco.internal 127.0.0.1
2016/03/17 06:39:12 [INFO] consul: adding server a.testco.internal (Addr: 127.0.0.1:15284) (DC: dc1)
2016/03/17 06:39:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:12 [INFO] raft: Node at 127.0.0.1:15284 [Candidate] entering Candidate state
2016/03/17 06:39:12 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/17 06:39:13 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/17 06:39:13 [INFO] consul: shutting down client
2016/03/17 06:39:13 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:13 [INFO] consul: shutting down server
2016/03/17 06:39:13 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:13 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:13 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_RPC_TLS (1.22s)
=== RUN   TestClientServer_UserEvent
2016/03/17 06:39:13 [INFO] serf: EventMemberJoin: Client 15289 127.0.0.1
2016/03/17 06:39:14 [INFO] raft: Node at 127.0.0.1:15293 [Follower] entering Follower state
2016/03/17 06:39:14 [INFO] serf: EventMemberJoin: Node 15292 127.0.0.1
2016/03/17 06:39:14 [INFO] consul: adding LAN server Node 15292 (Addr: 127.0.0.1:15293) (DC: dc1)
2016/03/17 06:39:14 [INFO] serf: EventMemberJoin: Node 15292.dc1 127.0.0.1
2016/03/17 06:39:14 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15294
2016/03/17 06:39:14 [INFO] consul: adding WAN server Node 15292.dc1 (Addr: 127.0.0.1:15293) (DC: dc1)
2016/03/17 06:39:14 [DEBUG] memberlist: TCP connection from=127.0.0.1:38252
2016/03/17 06:39:14 [INFO] serf: EventMemberJoin: Client 15289 127.0.0.1
2016/03/17 06:39:14 [INFO] serf: EventMemberJoin: Node 15292 127.0.0.1
2016/03/17 06:39:14 [INFO] consul: adding server Node 15292 (Addr: 127.0.0.1:15293) (DC: dc1)
2016/03/17 06:39:14 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:14 [INFO] raft: Node at 127.0.0.1:15293 [Candidate] entering Candidate state
2016/03/17 06:39:14 [DEBUG] serf: messageJoinType: Client 15289
2016/03/17 06:39:14 [DEBUG] serf: messageJoinType: Client 15289
2016/03/17 06:39:14 [DEBUG] serf: messageJoinType: Client 15289
2016/03/17 06:39:14 [DEBUG] serf: messageJoinType: Client 15289
2016/03/17 06:39:14 [DEBUG] serf: messageJoinType: Client 15289
2016/03/17 06:39:14 [DEBUG] serf: messageJoinType: Client 15289
2016/03/17 06:39:14 [DEBUG] serf: messageJoinType: Client 15289
2016/03/17 06:39:14 [DEBUG] serf: messageJoinType: Client 15289
2016/03/17 06:39:14 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:14 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:14 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:14 [INFO] raft: Node at 127.0.0.1:15293 [Leader] entering Leader state
2016/03/17 06:39:14 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:14 [INFO] consul: New leader elected: Node 15292
2016/03/17 06:39:14 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:39:14 [INFO] consul: New leader elected: Node 15292
2016/03/17 06:39:14 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:39:14 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:39:14 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:39:14 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:39:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:14 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:39:14 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:39:14 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:39:15 [DEBUG] raft: Node 127.0.0.1:15293 updated peer set (2): [127.0.0.1:15293]
2016/03/17 06:39:15 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:15 [INFO] consul: member 'Node 15292' joined, marking health alive
2016/03/17 06:39:15 [INFO] consul: member 'Client 15289' joined, marking health alive
2016/03/17 06:39:15 [DEBUG] consul: user event: foo
2016/03/17 06:39:15 [DEBUG] serf: messageUserEventType: consul:event:foo
2016/03/17 06:39:15 [DEBUG] consul: user event: foo
2016/03/17 06:39:15 [INFO] consul: shutting down server
2016/03/17 06:39:15 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:15 [DEBUG] serf: messageUserEventType: consul:event:foo
2016/03/17 06:39:15 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:15 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:39:15 [INFO] consul: shutting down client
2016/03/17 06:39:15 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientServer_UserEvent (2.07s)
=== RUN   TestClient_Encrypted
2016/03/17 06:39:15 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:39:15 [INFO] serf: EventMemberJoin: Client 15298 127.0.0.1
2016/03/17 06:39:15 [INFO] consul: shutting down client
2016/03/17 06:39:15 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:15 [INFO] consul: shutting down client
2016/03/17 06:39:15 [WARN] serf: Shutdown without a Leave
--- PASS: TestClient_Encrypted (0.12s)
=== RUN   TestCoordinate_Update
2016/03/17 06:39:16 [INFO] raft: Node at 127.0.0.1:15302 [Follower] entering Follower state
2016/03/17 06:39:16 [INFO] serf: EventMemberJoin: Node 15301 127.0.0.1
2016/03/17 06:39:16 [INFO] consul: adding LAN server Node 15301 (Addr: 127.0.0.1:15302) (DC: dc1)
2016/03/17 06:39:16 [INFO] serf: EventMemberJoin: Node 15301.dc1 127.0.0.1
2016/03/17 06:39:16 [INFO] consul: adding WAN server Node 15301.dc1 (Addr: 127.0.0.1:15302) (DC: dc1)
2016/03/17 06:39:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:16 [INFO] raft: Node at 127.0.0.1:15302 [Candidate] entering Candidate state
2016/03/17 06:39:17 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:17 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:17 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:17 [INFO] raft: Node at 127.0.0.1:15302 [Leader] entering Leader state
2016/03/17 06:39:17 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:17 [INFO] consul: New leader elected: Node 15301
2016/03/17 06:39:17 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:17 [DEBUG] raft: Node 127.0.0.1:15302 updated peer set (2): [127.0.0.1:15302]
2016/03/17 06:39:17 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:17 [INFO] consul: member 'Node 15301' joined, marking health alive
2016/03/17 06:39:23 [WARN] consul.coordinate: Discarded 1 coordinate updates
2016/03/17 06:39:24 [INFO] consul: shutting down server
2016/03/17 06:39:24 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:24 [WARN] serf: Shutdown without a Leave
--- PASS: TestCoordinate_Update (8.78s)
=== RUN   TestCoordinate_ListDatacenters
2016/03/17 06:39:25 [INFO] raft: Node at 127.0.0.1:15306 [Follower] entering Follower state
2016/03/17 06:39:25 [INFO] serf: EventMemberJoin: Node 15305 127.0.0.1
2016/03/17 06:39:25 [INFO] serf: EventMemberJoin: Node 15305.dc1 127.0.0.1
2016/03/17 06:39:25 [INFO] consul: adding LAN server Node 15305 (Addr: 127.0.0.1:15306) (DC: dc1)
2016/03/17 06:39:25 [INFO] consul: adding WAN server Node 15305.dc1 (Addr: 127.0.0.1:15306) (DC: dc1)
2016/03/17 06:39:25 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:25 [INFO] raft: Node at 127.0.0.1:15306 [Candidate] entering Candidate state
2016/03/17 06:39:25 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:25 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:25 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:25 [INFO] raft: Node at 127.0.0.1:15306 [Leader] entering Leader state
2016/03/17 06:39:25 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:25 [INFO] consul: New leader elected: Node 15305
2016/03/17 06:39:25 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:25 [DEBUG] raft: Node 127.0.0.1:15306 updated peer set (2): [127.0.0.1:15306]
2016/03/17 06:39:25 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:25 [INFO] consul: member 'Node 15305' joined, marking health alive
2016/03/17 06:39:26 [INFO] consul: shutting down server
2016/03/17 06:39:26 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:26 [WARN] serf: Shutdown without a Leave
--- PASS: TestCoordinate_ListDatacenters (1.82s)
=== RUN   TestCoordinate_ListNodes
2016/03/17 06:39:26 [INFO] raft: Node at 127.0.0.1:15310 [Follower] entering Follower state
2016/03/17 06:39:26 [INFO] serf: EventMemberJoin: Node 15309 127.0.0.1
2016/03/17 06:39:26 [INFO] consul: adding LAN server Node 15309 (Addr: 127.0.0.1:15310) (DC: dc1)
2016/03/17 06:39:26 [INFO] serf: EventMemberJoin: Node 15309.dc1 127.0.0.1
2016/03/17 06:39:26 [INFO] consul: adding WAN server Node 15309.dc1 (Addr: 127.0.0.1:15310) (DC: dc1)
2016/03/17 06:39:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:26 [INFO] raft: Node at 127.0.0.1:15310 [Candidate] entering Candidate state
2016/03/17 06:39:27 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:27 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:27 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:27 [INFO] raft: Node at 127.0.0.1:15310 [Leader] entering Leader state
2016/03/17 06:39:27 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:27 [INFO] consul: New leader elected: Node 15309
2016/03/17 06:39:27 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:27 [DEBUG] raft: Node 127.0.0.1:15310 updated peer set (2): [127.0.0.1:15310]
2016/03/17 06:39:27 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:27 [INFO] consul: member 'Node 15309' joined, marking health alive
2016/03/17 06:39:29 [INFO] consul: shutting down server
2016/03/17 06:39:29 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:29 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:29 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/17 06:39:29 [WARN] consul.coordinate: Batch update failed: leadership lost while committing log
--- FAIL: TestCoordinate_ListNodes (2.93s)
	coordinate_endpoint_test.go:286: bad: []
=== RUN   TestFilterDirEnt
--- PASS: TestFilterDirEnt (0.00s)
=== RUN   TestKeys
--- PASS: TestKeys (0.00s)
=== RUN   TestFSM_RegisterNode
--- PASS: TestFSM_RegisterNode (0.00s)
=== RUN   TestFSM_RegisterNode_Service
--- PASS: TestFSM_RegisterNode_Service (0.00s)
=== RUN   TestFSM_DeregisterService
--- PASS: TestFSM_DeregisterService (0.00s)
=== RUN   TestFSM_DeregisterCheck
--- PASS: TestFSM_DeregisterCheck (0.00s)
=== RUN   TestFSM_DeregisterNode
--- PASS: TestFSM_DeregisterNode (0.00s)
=== RUN   TestFSM_SnapshotRestore
2016/03/17 06:39:29 [INFO] consul.fsm: snapshot created in 178µs
--- PASS: TestFSM_SnapshotRestore (0.01s)
=== RUN   TestFSM_KVSSet
--- PASS: TestFSM_KVSSet (0.00s)
=== RUN   TestFSM_KVSDelete
--- PASS: TestFSM_KVSDelete (0.00s)
=== RUN   TestFSM_KVSDeleteTree
--- PASS: TestFSM_KVSDeleteTree (0.00s)
=== RUN   TestFSM_KVSDeleteCheckAndSet
--- PASS: TestFSM_KVSDeleteCheckAndSet (0.00s)
=== RUN   TestFSM_KVSCheckAndSet
--- PASS: TestFSM_KVSCheckAndSet (0.00s)
=== RUN   TestFSM_CoordinateUpdate
--- PASS: TestFSM_CoordinateUpdate (0.00s)
=== RUN   TestFSM_SessionCreate_Destroy
--- PASS: TestFSM_SessionCreate_Destroy (0.00s)
=== RUN   TestFSM_KVSLock
--- PASS: TestFSM_KVSLock (0.00s)
=== RUN   TestFSM_KVSUnlock
--- PASS: TestFSM_KVSUnlock (0.00s)
=== RUN   TestFSM_ACL_Set_Delete
--- PASS: TestFSM_ACL_Set_Delete (0.00s)
=== RUN   TestFSM_PreparedQuery_CRUD
--- PASS: TestFSM_PreparedQuery_CRUD (0.01s)
=== RUN   TestFSM_TombstoneReap
--- PASS: TestFSM_TombstoneReap (0.00s)
=== RUN   TestFSM_IgnoreUnknown
2016/03/17 06:39:29 [WARN] consul.fsm: ignoring unknown message type (64), upgrade to newer version
--- PASS: TestFSM_IgnoreUnknown (0.00s)
=== RUN   TestHealth_ChecksInState
2016/03/17 06:39:29 [INFO] raft: Node at 127.0.0.1:15314 [Follower] entering Follower state
2016/03/17 06:39:29 [INFO] serf: EventMemberJoin: Node 15313 127.0.0.1
2016/03/17 06:39:29 [INFO] consul: adding LAN server Node 15313 (Addr: 127.0.0.1:15314) (DC: dc1)
2016/03/17 06:39:29 [INFO] serf: EventMemberJoin: Node 15313.dc1 127.0.0.1
2016/03/17 06:39:29 [INFO] consul: adding WAN server Node 15313.dc1 (Addr: 127.0.0.1:15314) (DC: dc1)
2016/03/17 06:39:30 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:30 [INFO] raft: Node at 127.0.0.1:15314 [Candidate] entering Candidate state
2016/03/17 06:39:30 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:30 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:30 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:30 [INFO] raft: Node at 127.0.0.1:15314 [Leader] entering Leader state
2016/03/17 06:39:30 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:30 [INFO] consul: New leader elected: Node 15313
2016/03/17 06:39:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:30 [DEBUG] raft: Node 127.0.0.1:15314 updated peer set (2): [127.0.0.1:15314]
2016/03/17 06:39:30 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:30 [INFO] consul: member 'Node 15313' joined, marking health alive
2016/03/17 06:39:31 [INFO] consul: shutting down server
2016/03/17 06:39:31 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:31 [WARN] serf: Shutdown without a Leave
--- PASS: TestHealth_ChecksInState (2.80s)
=== RUN   TestHealth_ChecksInState_DistanceSort
2016/03/17 06:39:32 [INFO] raft: Node at 127.0.0.1:15318 [Follower] entering Follower state
2016/03/17 06:39:32 [INFO] serf: EventMemberJoin: Node 15317 127.0.0.1
2016/03/17 06:39:32 [INFO] consul: adding LAN server Node 15317 (Addr: 127.0.0.1:15318) (DC: dc1)
2016/03/17 06:39:32 [INFO] serf: EventMemberJoin: Node 15317.dc1 127.0.0.1
2016/03/17 06:39:32 [INFO] consul: adding WAN server Node 15317.dc1 (Addr: 127.0.0.1:15318) (DC: dc1)
2016/03/17 06:39:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:32 [INFO] raft: Node at 127.0.0.1:15318 [Candidate] entering Candidate state
2016/03/17 06:39:33 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:33 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:33 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:33 [INFO] raft: Node at 127.0.0.1:15318 [Leader] entering Leader state
2016/03/17 06:39:33 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:33 [INFO] consul: New leader elected: Node 15317
2016/03/17 06:39:33 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:33 [DEBUG] raft: Node 127.0.0.1:15318 updated peer set (2): [127.0.0.1:15318]
2016/03/17 06:39:33 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:33 [INFO] consul: member 'Node 15317' joined, marking health alive
2016/03/17 06:39:34 [INFO] consul: shutting down server
2016/03/17 06:39:34 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:34 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:34 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_ChecksInState_DistanceSort (2.42s)
=== RUN   TestHealth_NodeChecks
2016/03/17 06:39:35 [INFO] raft: Node at 127.0.0.1:15322 [Follower] entering Follower state
2016/03/17 06:39:35 [INFO] serf: EventMemberJoin: Node 15321 127.0.0.1
2016/03/17 06:39:35 [INFO] consul: adding LAN server Node 15321 (Addr: 127.0.0.1:15322) (DC: dc1)
2016/03/17 06:39:35 [INFO] serf: EventMemberJoin: Node 15321.dc1 127.0.0.1
2016/03/17 06:39:35 [INFO] consul: adding WAN server Node 15321.dc1 (Addr: 127.0.0.1:15322) (DC: dc1)
2016/03/17 06:39:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:35 [INFO] raft: Node at 127.0.0.1:15322 [Candidate] entering Candidate state
2016/03/17 06:39:35 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:35 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:35 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:35 [INFO] raft: Node at 127.0.0.1:15322 [Leader] entering Leader state
2016/03/17 06:39:35 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:35 [INFO] consul: New leader elected: Node 15321
2016/03/17 06:39:35 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:35 [DEBUG] raft: Node 127.0.0.1:15322 updated peer set (2): [127.0.0.1:15322]
2016/03/17 06:39:35 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:35 [INFO] consul: member 'Node 15321' joined, marking health alive
2016/03/17 06:39:36 [INFO] consul: shutting down server
2016/03/17 06:39:36 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:36 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:36 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_NodeChecks (2.08s)
=== RUN   TestHealth_ServiceChecks
2016/03/17 06:39:37 [INFO] raft: Node at 127.0.0.1:15326 [Follower] entering Follower state
2016/03/17 06:39:37 [INFO] serf: EventMemberJoin: Node 15325 127.0.0.1
2016/03/17 06:39:37 [INFO] consul: adding LAN server Node 15325 (Addr: 127.0.0.1:15326) (DC: dc1)
2016/03/17 06:39:37 [INFO] serf: EventMemberJoin: Node 15325.dc1 127.0.0.1
2016/03/17 06:39:37 [INFO] consul: adding WAN server Node 15325.dc1 (Addr: 127.0.0.1:15326) (DC: dc1)
2016/03/17 06:39:37 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:37 [INFO] raft: Node at 127.0.0.1:15326 [Candidate] entering Candidate state
2016/03/17 06:39:37 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:37 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:37 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:37 [INFO] raft: Node at 127.0.0.1:15326 [Leader] entering Leader state
2016/03/17 06:39:37 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:37 [INFO] consul: New leader elected: Node 15325
2016/03/17 06:39:37 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:38 [DEBUG] raft: Node 127.0.0.1:15326 updated peer set (2): [127.0.0.1:15326]
2016/03/17 06:39:38 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:38 [INFO] consul: member 'Node 15325' joined, marking health alive
2016/03/17 06:39:38 [INFO] consul: shutting down server
2016/03/17 06:39:38 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:38 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:39 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:39:39 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestHealth_ServiceChecks (2.32s)
=== RUN   TestHealth_ServiceChecks_DistanceSort
2016/03/17 06:39:39 [INFO] raft: Node at 127.0.0.1:15330 [Follower] entering Follower state
2016/03/17 06:39:39 [INFO] serf: EventMemberJoin: Node 15329 127.0.0.1
2016/03/17 06:39:39 [INFO] consul: adding LAN server Node 15329 (Addr: 127.0.0.1:15330) (DC: dc1)
2016/03/17 06:39:39 [INFO] serf: EventMemberJoin: Node 15329.dc1 127.0.0.1
2016/03/17 06:39:39 [INFO] consul: adding WAN server Node 15329.dc1 (Addr: 127.0.0.1:15330) (DC: dc1)
2016/03/17 06:39:39 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:39 [INFO] raft: Node at 127.0.0.1:15330 [Candidate] entering Candidate state
2016/03/17 06:39:39 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:39 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:39 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:39 [INFO] raft: Node at 127.0.0.1:15330 [Leader] entering Leader state
2016/03/17 06:39:39 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:39 [INFO] consul: New leader elected: Node 15329
2016/03/17 06:39:40 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:40 [DEBUG] raft: Node 127.0.0.1:15330 updated peer set (2): [127.0.0.1:15330]
2016/03/17 06:39:40 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:40 [INFO] consul: member 'Node 15329' joined, marking health alive
2016/03/17 06:39:41 [INFO] consul: shutting down server
2016/03/17 06:39:41 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:41 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:41 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_ServiceChecks_DistanceSort (2.49s)
=== RUN   TestHealth_ServiceNodes
2016/03/17 06:39:41 [INFO] raft: Node at 127.0.0.1:15334 [Follower] entering Follower state
2016/03/17 06:39:41 [INFO] serf: EventMemberJoin: Node 15333 127.0.0.1
2016/03/17 06:39:41 [INFO] consul: adding LAN server Node 15333 (Addr: 127.0.0.1:15334) (DC: dc1)
2016/03/17 06:39:41 [INFO] serf: EventMemberJoin: Node 15333.dc1 127.0.0.1
2016/03/17 06:39:41 [INFO] consul: adding WAN server Node 15333.dc1 (Addr: 127.0.0.1:15334) (DC: dc1)
2016/03/17 06:39:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:42 [INFO] raft: Node at 127.0.0.1:15334 [Candidate] entering Candidate state
2016/03/17 06:39:42 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:42 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:42 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:42 [INFO] raft: Node at 127.0.0.1:15334 [Leader] entering Leader state
2016/03/17 06:39:42 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:42 [INFO] consul: New leader elected: Node 15333
2016/03/17 06:39:42 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:42 [DEBUG] raft: Node 127.0.0.1:15334 updated peer set (2): [127.0.0.1:15334]
2016/03/17 06:39:42 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:42 [INFO] consul: member 'Node 15333' joined, marking health alive
2016/03/17 06:39:43 [INFO] consul: shutting down server
2016/03/17 06:39:43 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:43 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:44 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:39:44 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestHealth_ServiceNodes (2.53s)
=== RUN   TestHealth_ServiceNodes_DistanceSort
2016/03/17 06:39:44 [INFO] raft: Node at 127.0.0.1:15338 [Follower] entering Follower state
2016/03/17 06:39:44 [INFO] serf: EventMemberJoin: Node 15337 127.0.0.1
2016/03/17 06:39:44 [INFO] consul: adding LAN server Node 15337 (Addr: 127.0.0.1:15338) (DC: dc1)
2016/03/17 06:39:44 [INFO] serf: EventMemberJoin: Node 15337.dc1 127.0.0.1
2016/03/17 06:39:44 [INFO] consul: adding WAN server Node 15337.dc1 (Addr: 127.0.0.1:15338) (DC: dc1)
2016/03/17 06:39:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:44 [INFO] raft: Node at 127.0.0.1:15338 [Candidate] entering Candidate state
2016/03/17 06:39:45 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:45 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:45 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:45 [INFO] raft: Node at 127.0.0.1:15338 [Leader] entering Leader state
2016/03/17 06:39:45 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:45 [INFO] consul: New leader elected: Node 15337
2016/03/17 06:39:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:45 [DEBUG] raft: Node 127.0.0.1:15338 updated peer set (2): [127.0.0.1:15338]
2016/03/17 06:39:45 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:45 [INFO] consul: member 'Node 15337' joined, marking health alive
2016/03/17 06:39:46 [INFO] consul: shutting down server
2016/03/17 06:39:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:46 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_ServiceNodes_DistanceSort (2.72s)
=== RUN   TestHealth_NodeChecks_FilterACL
2016/03/17 06:39:47 [INFO] raft: Node at 127.0.0.1:15342 [Follower] entering Follower state
2016/03/17 06:39:47 [INFO] serf: EventMemberJoin: Node 15341 127.0.0.1
2016/03/17 06:39:47 [INFO] consul: adding LAN server Node 15341 (Addr: 127.0.0.1:15342) (DC: dc1)
2016/03/17 06:39:47 [INFO] serf: EventMemberJoin: Node 15341.dc1 127.0.0.1
2016/03/17 06:39:47 [INFO] consul: adding WAN server Node 15341.dc1 (Addr: 127.0.0.1:15342) (DC: dc1)
2016/03/17 06:39:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:47 [INFO] raft: Node at 127.0.0.1:15342 [Candidate] entering Candidate state
2016/03/17 06:39:47 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:47 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:47 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:47 [INFO] raft: Node at 127.0.0.1:15342 [Leader] entering Leader state
2016/03/17 06:39:47 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:47 [INFO] consul: New leader elected: Node 15341
2016/03/17 06:39:48 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:48 [DEBUG] raft: Node 127.0.0.1:15342 updated peer set (2): [127.0.0.1:15342]
2016/03/17 06:39:48 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:48 [INFO] consul: member 'Node 15341' joined, marking health alive
2016/03/17 06:39:49 [DEBUG] consul: dropping check "service:bar" from result due to ACLs
2016/03/17 06:39:49 [INFO] consul: shutting down server
2016/03/17 06:39:49 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:49 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:49 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_NodeChecks_FilterACL (3.21s)
=== RUN   TestHealth_ServiceChecks_FilterACL
2016/03/17 06:39:50 [INFO] raft: Node at 127.0.0.1:15346 [Follower] entering Follower state
2016/03/17 06:39:50 [INFO] serf: EventMemberJoin: Node 15345 127.0.0.1
2016/03/17 06:39:50 [INFO] consul: adding LAN server Node 15345 (Addr: 127.0.0.1:15346) (DC: dc1)
2016/03/17 06:39:50 [INFO] serf: EventMemberJoin: Node 15345.dc1 127.0.0.1
2016/03/17 06:39:50 [INFO] consul: adding WAN server Node 15345.dc1 (Addr: 127.0.0.1:15346) (DC: dc1)
2016/03/17 06:39:50 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:50 [INFO] raft: Node at 127.0.0.1:15346 [Candidate] entering Candidate state
2016/03/17 06:39:50 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:50 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:50 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:50 [INFO] raft: Node at 127.0.0.1:15346 [Leader] entering Leader state
2016/03/17 06:39:50 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:50 [INFO] consul: New leader elected: Node 15345
2016/03/17 06:39:51 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:51 [DEBUG] raft: Node 127.0.0.1:15346 updated peer set (2): [127.0.0.1:15346]
2016/03/17 06:39:51 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:51 [INFO] consul: member 'Node 15345' joined, marking health alive
2016/03/17 06:39:52 [DEBUG] consul: dropping check "service:bar" from result due to ACLs
2016/03/17 06:39:52 [INFO] consul: shutting down server
2016/03/17 06:39:52 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:53 [WARN] serf: Shutdown without a Leave
--- PASS: TestHealth_ServiceChecks_FilterACL (3.15s)
=== RUN   TestHealth_ServiceNodes_FilterACL
2016/03/17 06:39:53 [INFO] raft: Node at 127.0.0.1:15350 [Follower] entering Follower state
2016/03/17 06:39:53 [INFO] serf: EventMemberJoin: Node 15349 127.0.0.1
2016/03/17 06:39:53 [INFO] consul: adding LAN server Node 15349 (Addr: 127.0.0.1:15350) (DC: dc1)
2016/03/17 06:39:53 [INFO] serf: EventMemberJoin: Node 15349.dc1 127.0.0.1
2016/03/17 06:39:53 [INFO] consul: adding WAN server Node 15349.dc1 (Addr: 127.0.0.1:15350) (DC: dc1)
2016/03/17 06:39:53 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:53 [INFO] raft: Node at 127.0.0.1:15350 [Candidate] entering Candidate state
2016/03/17 06:39:54 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:54 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:54 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:54 [INFO] raft: Node at 127.0.0.1:15350 [Leader] entering Leader state
2016/03/17 06:39:54 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:54 [INFO] consul: New leader elected: Node 15349
2016/03/17 06:39:54 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:54 [DEBUG] raft: Node 127.0.0.1:15350 updated peer set (2): [127.0.0.1:15350]
2016/03/17 06:39:54 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:54 [INFO] consul: member 'Node 15349' joined, marking health alive
2016/03/17 06:39:56 [DEBUG] consul: dropping node "Node 15349" from result due to ACLs
2016/03/17 06:39:56 [INFO] consul: shutting down server
2016/03/17 06:39:56 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:56 [WARN] serf: Shutdown without a Leave
--- PASS: TestHealth_ServiceNodes_FilterACL (3.38s)
=== RUN   TestHealth_ChecksInState_FilterACL
2016/03/17 06:39:56 [INFO] raft: Node at 127.0.0.1:15354 [Follower] entering Follower state
2016/03/17 06:39:57 [INFO] serf: EventMemberJoin: Node 15353 127.0.0.1
2016/03/17 06:39:57 [INFO] consul: adding LAN server Node 15353 (Addr: 127.0.0.1:15354) (DC: dc1)
2016/03/17 06:39:57 [INFO] serf: EventMemberJoin: Node 15353.dc1 127.0.0.1
2016/03/17 06:39:57 [INFO] consul: adding WAN server Node 15353.dc1 (Addr: 127.0.0.1:15354) (DC: dc1)
2016/03/17 06:39:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:39:57 [INFO] raft: Node at 127.0.0.1:15354 [Candidate] entering Candidate state
2016/03/17 06:39:57 [DEBUG] raft: Votes needed: 1
2016/03/17 06:39:57 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:39:57 [INFO] raft: Election won. Tally: 1
2016/03/17 06:39:57 [INFO] raft: Node at 127.0.0.1:15354 [Leader] entering Leader state
2016/03/17 06:39:57 [INFO] consul: cluster leadership acquired
2016/03/17 06:39:57 [INFO] consul: New leader elected: Node 15353
2016/03/17 06:39:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:39:57 [DEBUG] raft: Node 127.0.0.1:15354 updated peer set (2): [127.0.0.1:15354]
2016/03/17 06:39:57 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:39:58 [INFO] consul: member 'Node 15353' joined, marking health alive
2016/03/17 06:39:59 [INFO] consul: shutting down server
2016/03/17 06:39:59 [WARN] serf: Shutdown without a Leave
2016/03/17 06:39:59 [WARN] serf: Shutdown without a Leave
--- PASS: TestHealth_ChecksInState_FilterACL (3.17s)
=== RUN   TestInternal_NodeInfo
2016/03/17 06:40:00 [INFO] raft: Node at 127.0.0.1:15358 [Follower] entering Follower state
2016/03/17 06:40:00 [INFO] serf: EventMemberJoin: Node 15357 127.0.0.1
2016/03/17 06:40:00 [INFO] consul: adding LAN server Node 15357 (Addr: 127.0.0.1:15358) (DC: dc1)
2016/03/17 06:40:00 [INFO] serf: EventMemberJoin: Node 15357.dc1 127.0.0.1
2016/03/17 06:40:00 [INFO] consul: adding WAN server Node 15357.dc1 (Addr: 127.0.0.1:15358) (DC: dc1)
2016/03/17 06:40:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:00 [INFO] raft: Node at 127.0.0.1:15358 [Candidate] entering Candidate state
2016/03/17 06:40:00 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:00 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:00 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:00 [INFO] raft: Node at 127.0.0.1:15358 [Leader] entering Leader state
2016/03/17 06:40:00 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:00 [INFO] consul: New leader elected: Node 15357
2016/03/17 06:40:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:01 [DEBUG] raft: Node 127.0.0.1:15358 updated peer set (2): [127.0.0.1:15358]
2016/03/17 06:40:01 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:01 [INFO] consul: member 'Node 15357' joined, marking health alive
2016/03/17 06:40:01 [INFO] consul: shutting down server
2016/03/17 06:40:01 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:01 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:01 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:40:01 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestInternal_NodeInfo (2.28s)
=== RUN   TestInternal_NodeDump
2016/03/17 06:40:02 [INFO] raft: Node at 127.0.0.1:15362 [Follower] entering Follower state
2016/03/17 06:40:02 [INFO] serf: EventMemberJoin: Node 15361 127.0.0.1
2016/03/17 06:40:02 [INFO] consul: adding LAN server Node 15361 (Addr: 127.0.0.1:15362) (DC: dc1)
2016/03/17 06:40:02 [INFO] serf: EventMemberJoin: Node 15361.dc1 127.0.0.1
2016/03/17 06:40:02 [INFO] consul: adding WAN server Node 15361.dc1 (Addr: 127.0.0.1:15362) (DC: dc1)
2016/03/17 06:40:02 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:02 [INFO] raft: Node at 127.0.0.1:15362 [Candidate] entering Candidate state
2016/03/17 06:40:03 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:03 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:03 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:03 [INFO] raft: Node at 127.0.0.1:15362 [Leader] entering Leader state
2016/03/17 06:40:03 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:03 [INFO] consul: New leader elected: Node 15361
2016/03/17 06:40:03 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:03 [DEBUG] raft: Node 127.0.0.1:15362 updated peer set (2): [127.0.0.1:15362]
2016/03/17 06:40:03 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:03 [INFO] consul: member 'Node 15361' joined, marking health alive
2016/03/17 06:40:04 [INFO] consul: shutting down server
2016/03/17 06:40:04 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:04 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:04 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestInternal_NodeDump (2.96s)
=== RUN   TestInternal_KeyringOperation
2016/03/17 06:40:05 [INFO] raft: Node at 127.0.0.1:15366 [Follower] entering Follower state
2016/03/17 06:40:05 [INFO] serf: EventMemberJoin: Node 15365 127.0.0.1
2016/03/17 06:40:05 [INFO] consul: adding LAN server Node 15365 (Addr: 127.0.0.1:15366) (DC: dc1)
2016/03/17 06:40:05 [INFO] serf: EventMemberJoin: Node 15365.dc1 127.0.0.1
2016/03/17 06:40:05 [INFO] consul: adding WAN server Node 15365.dc1 (Addr: 127.0.0.1:15366) (DC: dc1)
2016/03/17 06:40:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:05 [INFO] raft: Node at 127.0.0.1:15366 [Candidate] entering Candidate state
2016/03/17 06:40:05 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:05 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:05 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:05 [INFO] raft: Node at 127.0.0.1:15366 [Leader] entering Leader state
2016/03/17 06:40:05 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:05 [INFO] consul: New leader elected: Node 15365
2016/03/17 06:40:06 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:06 [DEBUG] raft: Node 127.0.0.1:15366 updated peer set (2): [127.0.0.1:15366]
2016/03/17 06:40:06 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:06 [INFO] consul: member 'Node 15365' joined, marking health alive
2016/03/17 06:40:06 [INFO] serf: Received list-keys query
2016/03/17 06:40:06 [DEBUG] serf: messageQueryResponseType: Node 15365.dc1
2016/03/17 06:40:06 [INFO] serf: Received list-keys query
2016/03/17 06:40:06 [DEBUG] serf: messageQueryResponseType: Node 15365
2016/03/17 06:40:07 [INFO] raft: Node at 127.0.0.1:15370 [Follower] entering Follower state
2016/03/17 06:40:07 [INFO] serf: EventMemberJoin: Node 15369 127.0.0.1
2016/03/17 06:40:07 [INFO] consul: adding LAN server Node 15369 (Addr: 127.0.0.1:15370) (DC: dc2)
2016/03/17 06:40:07 [INFO] serf: EventMemberJoin: Node 15369.dc2 127.0.0.1
2016/03/17 06:40:07 [INFO] consul: adding WAN server Node 15369.dc2 (Addr: 127.0.0.1:15370) (DC: dc2)
2016/03/17 06:40:07 [DEBUG] memberlist: TCP connection from=127.0.0.1:57702
2016/03/17 06:40:07 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15368
2016/03/17 06:40:07 [INFO] serf: EventMemberJoin: Node 15369.dc2 127.0.0.1
2016/03/17 06:40:07 [INFO] consul: adding WAN server Node 15369.dc2 (Addr: 127.0.0.1:15370) (DC: dc2)
2016/03/17 06:40:07 [INFO] serf: EventMemberJoin: Node 15365.dc1 127.0.0.1
2016/03/17 06:40:07 [INFO] consul: adding WAN server Node 15365.dc1 (Addr: 127.0.0.1:15366) (DC: dc1)
2016/03/17 06:40:07 [INFO] serf: Received list-keys query
2016/03/17 06:40:07 [DEBUG] serf: messageQueryResponseType: Node 15365.dc1
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [INFO] serf: Received list-keys query
2016/03/17 06:40:07 [INFO] serf: Received list-keys query
2016/03/17 06:40:07 [DEBUG] serf: messageQueryResponseType: Node 15369.dc2
2016/03/17 06:40:07 [DEBUG] serf: messageQueryResponseType: Node 15369.dc2
2016/03/17 06:40:07 [INFO] serf: Received list-keys query
2016/03/17 06:40:07 [INFO] serf: Received list-keys query
2016/03/17 06:40:07 [DEBUG] serf: messageQueryResponseType: Node 15365
2016/03/17 06:40:07 [DEBUG] serf: messageQueryResponseType: Node 15369
2016/03/17 06:40:07 [INFO] consul: shutting down server
2016/03/17 06:40:07 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:07 [INFO] raft: Node at 127.0.0.1:15370 [Candidate] entering Candidate state
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/17 06:40:07 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/03/17 06:40:07 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:07 [DEBUG] memberlist: Failed UDP ping: Node 15369.dc2 (timeout reached)
2016/03/17 06:40:07 [INFO] memberlist: Suspect Node 15369.dc2 has failed, no acks received
2016/03/17 06:40:07 [DEBUG] memberlist: Failed UDP ping: Node 15369.dc2 (timeout reached)
2016/03/17 06:40:07 [INFO] memberlist: Suspect Node 15369.dc2 has failed, no acks received
2016/03/17 06:40:07 [INFO] memberlist: Marking Node 15369.dc2 as failed, suspect timeout reached
2016/03/17 06:40:07 [INFO] serf: EventMemberFailed: Node 15369.dc2 127.0.0.1
2016/03/17 06:40:07 [INFO] consul: removing WAN server Node 15369.dc2 (Addr: 127.0.0.1:15370) (DC: dc2)
2016/03/17 06:40:08 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:08 [INFO] consul: shutting down server
2016/03/17 06:40:08 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:08 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:08 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestInternal_KeyringOperation (3.64s)
=== RUN   TestInternal_NodeInfo_FilterACL
2016/03/17 06:40:09 [INFO] raft: Node at 127.0.0.1:15374 [Follower] entering Follower state
2016/03/17 06:40:09 [INFO] serf: EventMemberJoin: Node 15373 127.0.0.1
2016/03/17 06:40:09 [INFO] consul: adding LAN server Node 15373 (Addr: 127.0.0.1:15374) (DC: dc1)
2016/03/17 06:40:09 [INFO] serf: EventMemberJoin: Node 15373.dc1 127.0.0.1
2016/03/17 06:40:09 [INFO] consul: adding WAN server Node 15373.dc1 (Addr: 127.0.0.1:15374) (DC: dc1)
2016/03/17 06:40:09 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:09 [INFO] raft: Node at 127.0.0.1:15374 [Candidate] entering Candidate state
2016/03/17 06:40:09 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:09 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:09 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:09 [INFO] raft: Node at 127.0.0.1:15374 [Leader] entering Leader state
2016/03/17 06:40:09 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:09 [INFO] consul: New leader elected: Node 15373
2016/03/17 06:40:09 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:09 [DEBUG] raft: Node 127.0.0.1:15374 updated peer set (2): [127.0.0.1:15374]
2016/03/17 06:40:09 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:10 [INFO] consul: member 'Node 15373' joined, marking health alive
2016/03/17 06:40:11 [DEBUG] consul: dropping check "service:bar" from result due to ACLs
2016/03/17 06:40:11 [INFO] consul: shutting down server
2016/03/17 06:40:11 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:11 [WARN] serf: Shutdown without a Leave
--- PASS: TestInternal_NodeInfo_FilterACL (3.28s)
=== RUN   TestInternal_NodeDump_FilterACL
2016/03/17 06:40:12 [INFO] raft: Node at 127.0.0.1:15378 [Follower] entering Follower state
2016/03/17 06:40:12 [INFO] serf: EventMemberJoin: Node 15377 127.0.0.1
2016/03/17 06:40:12 [INFO] consul: adding LAN server Node 15377 (Addr: 127.0.0.1:15378) (DC: dc1)
2016/03/17 06:40:12 [INFO] serf: EventMemberJoin: Node 15377.dc1 127.0.0.1
2016/03/17 06:40:12 [INFO] consul: adding WAN server Node 15377.dc1 (Addr: 127.0.0.1:15378) (DC: dc1)
2016/03/17 06:40:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:12 [INFO] raft: Node at 127.0.0.1:15378 [Candidate] entering Candidate state
2016/03/17 06:40:12 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:12 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:12 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:12 [INFO] raft: Node at 127.0.0.1:15378 [Leader] entering Leader state
2016/03/17 06:40:12 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:12 [INFO] consul: New leader elected: Node 15377
2016/03/17 06:40:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:13 [DEBUG] raft: Node 127.0.0.1:15378 updated peer set (2): [127.0.0.1:15378]
2016/03/17 06:40:13 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:13 [INFO] consul: member 'Node 15377' joined, marking health alive
2016/03/17 06:40:15 [INFO] consul: shutting down server
2016/03/17 06:40:15 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:15 [WARN] serf: Shutdown without a Leave
--- PASS: TestInternal_NodeDump_FilterACL (3.48s)
=== RUN   TestInternal_EventFire_Token
2016/03/17 06:40:15 [INFO] raft: Node at 127.0.0.1:15382 [Follower] entering Follower state
2016/03/17 06:40:15 [INFO] serf: EventMemberJoin: Node 15381 127.0.0.1
2016/03/17 06:40:15 [INFO] consul: adding LAN server Node 15381 (Addr: 127.0.0.1:15382) (DC: dc1)
2016/03/17 06:40:15 [INFO] serf: EventMemberJoin: Node 15381.dc1 127.0.0.1
2016/03/17 06:40:15 [INFO] consul: adding WAN server Node 15381.dc1 (Addr: 127.0.0.1:15382) (DC: dc1)
2016/03/17 06:40:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:15 [INFO] raft: Node at 127.0.0.1:15382 [Candidate] entering Candidate state
2016/03/17 06:40:16 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:16 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:16 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:16 [INFO] raft: Node at 127.0.0.1:15382 [Leader] entering Leader state
2016/03/17 06:40:16 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:16 [INFO] consul: New leader elected: Node 15381
2016/03/17 06:40:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:16 [DEBUG] raft: Node 127.0.0.1:15382 updated peer set (2): [127.0.0.1:15382]
2016/03/17 06:40:16 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:17 [INFO] consul: member 'Node 15381' joined, marking health alive
2016/03/17 06:40:17 [WARN] consul: user event "foo" blocked by ACLs
2016/03/17 06:40:17 [DEBUG] consul: user event: foo
2016/03/17 06:40:17 [INFO] consul: shutting down server
2016/03/17 06:40:17 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:17 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:17 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:40:17 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestInternal_EventFire_Token (2.49s)
=== RUN   TestHealthCheckRace
--- PASS: TestHealthCheckRace (0.00s)
=== RUN   TestKVS_Apply
2016/03/17 06:40:18 [INFO] raft: Node at 127.0.0.1:15386 [Follower] entering Follower state
2016/03/17 06:40:18 [INFO] serf: EventMemberJoin: Node 15385 127.0.0.1
2016/03/17 06:40:18 [INFO] consul: adding LAN server Node 15385 (Addr: 127.0.0.1:15386) (DC: dc1)
2016/03/17 06:40:18 [INFO] serf: EventMemberJoin: Node 15385.dc1 127.0.0.1
2016/03/17 06:40:18 [INFO] consul: adding WAN server Node 15385.dc1 (Addr: 127.0.0.1:15386) (DC: dc1)
2016/03/17 06:40:18 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:18 [INFO] raft: Node at 127.0.0.1:15386 [Candidate] entering Candidate state
2016/03/17 06:40:18 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:18 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:18 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:18 [INFO] raft: Node at 127.0.0.1:15386 [Leader] entering Leader state
2016/03/17 06:40:18 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:18 [INFO] consul: New leader elected: Node 15385
2016/03/17 06:40:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:19 [DEBUG] raft: Node 127.0.0.1:15386 updated peer set (2): [127.0.0.1:15386]
2016/03/17 06:40:19 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:19 [INFO] consul: member 'Node 15385' joined, marking health alive
2016/03/17 06:40:20 [INFO] consul: shutting down server
2016/03/17 06:40:20 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:20 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:20 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVS_Apply (2.61s)
=== RUN   TestKVS_Apply_ACLDeny
2016/03/17 06:40:20 [INFO] raft: Node at 127.0.0.1:15390 [Follower] entering Follower state
2016/03/17 06:40:20 [INFO] serf: EventMemberJoin: Node 15389 127.0.0.1
2016/03/17 06:40:20 [INFO] consul: adding LAN server Node 15389 (Addr: 127.0.0.1:15390) (DC: dc1)
2016/03/17 06:40:20 [INFO] serf: EventMemberJoin: Node 15389.dc1 127.0.0.1
2016/03/17 06:40:20 [INFO] consul: adding WAN server Node 15389.dc1 (Addr: 127.0.0.1:15390) (DC: dc1)
2016/03/17 06:40:20 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:20 [INFO] raft: Node at 127.0.0.1:15390 [Candidate] entering Candidate state
2016/03/17 06:40:21 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:21 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:21 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:21 [INFO] raft: Node at 127.0.0.1:15390 [Leader] entering Leader state
2016/03/17 06:40:21 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:21 [INFO] consul: New leader elected: Node 15389
2016/03/17 06:40:21 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:21 [DEBUG] raft: Node 127.0.0.1:15390 updated peer set (2): [127.0.0.1:15390]
2016/03/17 06:40:21 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:21 [INFO] consul: member 'Node 15389' joined, marking health alive
2016/03/17 06:40:22 [INFO] consul: shutting down server
2016/03/17 06:40:22 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:22 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:22 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVS_Apply_ACLDeny (2.34s)
=== RUN   TestKVS_Get
2016/03/17 06:40:23 [INFO] raft: Node at 127.0.0.1:15394 [Follower] entering Follower state
2016/03/17 06:40:23 [INFO] serf: EventMemberJoin: Node 15393 127.0.0.1
2016/03/17 06:40:23 [INFO] consul: adding LAN server Node 15393 (Addr: 127.0.0.1:15394) (DC: dc1)
2016/03/17 06:40:23 [INFO] serf: EventMemberJoin: Node 15393.dc1 127.0.0.1
2016/03/17 06:40:23 [INFO] consul: adding WAN server Node 15393.dc1 (Addr: 127.0.0.1:15394) (DC: dc1)
2016/03/17 06:40:23 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:23 [INFO] raft: Node at 127.0.0.1:15394 [Candidate] entering Candidate state
2016/03/17 06:40:23 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:23 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:23 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:23 [INFO] raft: Node at 127.0.0.1:15394 [Leader] entering Leader state
2016/03/17 06:40:23 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:23 [INFO] consul: New leader elected: Node 15393
2016/03/17 06:40:24 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:24 [DEBUG] raft: Node 127.0.0.1:15394 updated peer set (2): [127.0.0.1:15394]
2016/03/17 06:40:24 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:24 [INFO] consul: member 'Node 15393' joined, marking health alive
2016/03/17 06:40:24 [INFO] consul: shutting down server
2016/03/17 06:40:24 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:24 [WARN] serf: Shutdown without a Leave
--- PASS: TestKVS_Get (2.30s)
=== RUN   TestKVS_Get_ACLDeny
2016/03/17 06:40:25 [INFO] raft: Node at 127.0.0.1:15398 [Follower] entering Follower state
2016/03/17 06:40:25 [INFO] serf: EventMemberJoin: Node 15397 127.0.0.1
2016/03/17 06:40:25 [INFO] consul: adding LAN server Node 15397 (Addr: 127.0.0.1:15398) (DC: dc1)
2016/03/17 06:40:25 [INFO] serf: EventMemberJoin: Node 15397.dc1 127.0.0.1
2016/03/17 06:40:25 [INFO] consul: adding WAN server Node 15397.dc1 (Addr: 127.0.0.1:15398) (DC: dc1)
2016/03/17 06:40:25 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:25 [INFO] raft: Node at 127.0.0.1:15398 [Candidate] entering Candidate state
2016/03/17 06:40:26 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:26 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:26 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:26 [INFO] raft: Node at 127.0.0.1:15398 [Leader] entering Leader state
2016/03/17 06:40:26 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:26 [INFO] consul: New leader elected: Node 15397
2016/03/17 06:40:26 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:26 [DEBUG] raft: Node 127.0.0.1:15398 updated peer set (2): [127.0.0.1:15398]
2016/03/17 06:40:26 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:26 [INFO] consul: member 'Node 15397' joined, marking health alive
2016/03/17 06:40:27 [INFO] consul: shutting down server
2016/03/17 06:40:27 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:27 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:27 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:40:27 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVS_Get_ACLDeny (2.56s)
=== RUN   TestKVSEndpoint_List
2016/03/17 06:40:28 [INFO] serf: EventMemberJoin: Node 15401 127.0.0.1
2016/03/17 06:40:28 [INFO] raft: Node at 127.0.0.1:15402 [Follower] entering Follower state
2016/03/17 06:40:28 [INFO] consul: adding LAN server Node 15401 (Addr: 127.0.0.1:15402) (DC: dc1)
2016/03/17 06:40:28 [INFO] serf: EventMemberJoin: Node 15401.dc1 127.0.0.1
2016/03/17 06:40:28 [INFO] consul: adding WAN server Node 15401.dc1 (Addr: 127.0.0.1:15402) (DC: dc1)
2016/03/17 06:40:28 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:28 [INFO] raft: Node at 127.0.0.1:15402 [Candidate] entering Candidate state
2016/03/17 06:40:28 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:28 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:28 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:28 [INFO] raft: Node at 127.0.0.1:15402 [Leader] entering Leader state
2016/03/17 06:40:28 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:28 [INFO] consul: New leader elected: Node 15401
2016/03/17 06:40:28 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:28 [DEBUG] raft: Node 127.0.0.1:15402 updated peer set (2): [127.0.0.1:15402]
2016/03/17 06:40:29 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:29 [INFO] consul: member 'Node 15401' joined, marking health alive
2016/03/17 06:40:30 [INFO] consul: shutting down server
2016/03/17 06:40:30 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:30 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:30 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVSEndpoint_List (3.04s)
=== RUN   TestKVSEndpoint_List_Blocking
2016/03/17 06:40:31 [INFO] raft: Node at 127.0.0.1:15406 [Follower] entering Follower state
2016/03/17 06:40:31 [INFO] serf: EventMemberJoin: Node 15405 127.0.0.1
2016/03/17 06:40:31 [INFO] consul: adding LAN server Node 15405 (Addr: 127.0.0.1:15406) (DC: dc1)
2016/03/17 06:40:31 [INFO] serf: EventMemberJoin: Node 15405.dc1 127.0.0.1
2016/03/17 06:40:31 [INFO] consul: adding WAN server Node 15405.dc1 (Addr: 127.0.0.1:15406) (DC: dc1)
2016/03/17 06:40:31 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:31 [INFO] raft: Node at 127.0.0.1:15406 [Candidate] entering Candidate state
2016/03/17 06:40:31 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:31 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:31 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:31 [INFO] raft: Node at 127.0.0.1:15406 [Leader] entering Leader state
2016/03/17 06:40:31 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:31 [INFO] consul: New leader elected: Node 15405
2016/03/17 06:40:31 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:31 [DEBUG] raft: Node 127.0.0.1:15406 updated peer set (2): [127.0.0.1:15406]
2016/03/17 06:40:32 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:32 [INFO] consul: member 'Node 15405' joined, marking health alive
2016/03/17 06:40:34 [INFO] consul: shutting down server
2016/03/17 06:40:34 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:34 [WARN] serf: Shutdown without a Leave
--- PASS: TestKVSEndpoint_List_Blocking (3.60s)
=== RUN   TestKVSEndpoint_List_ACLDeny
2016/03/17 06:40:34 [INFO] raft: Node at 127.0.0.1:15410 [Follower] entering Follower state
2016/03/17 06:40:34 [INFO] serf: EventMemberJoin: Node 15409 127.0.0.1
2016/03/17 06:40:34 [INFO] consul: adding LAN server Node 15409 (Addr: 127.0.0.1:15410) (DC: dc1)
2016/03/17 06:40:34 [INFO] serf: EventMemberJoin: Node 15409.dc1 127.0.0.1
2016/03/17 06:40:34 [INFO] consul: adding WAN server Node 15409.dc1 (Addr: 127.0.0.1:15410) (DC: dc1)
2016/03/17 06:40:34 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:34 [INFO] raft: Node at 127.0.0.1:15410 [Candidate] entering Candidate state
2016/03/17 06:40:35 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:35 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:35 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:35 [INFO] raft: Node at 127.0.0.1:15410 [Leader] entering Leader state
2016/03/17 06:40:35 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:35 [INFO] consul: New leader elected: Node 15409
2016/03/17 06:40:35 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:35 [DEBUG] raft: Node 127.0.0.1:15410 updated peer set (2): [127.0.0.1:15410]
2016/03/17 06:40:35 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:35 [INFO] consul: member 'Node 15409' joined, marking health alive
2016/03/17 06:40:38 [INFO] consul: shutting down server
2016/03/17 06:40:38 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:38 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:38 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVSEndpoint_List_ACLDeny (4.22s)
=== RUN   TestKVSEndpoint_ListKeys
2016/03/17 06:40:39 [INFO] raft: Node at 127.0.0.1:15414 [Follower] entering Follower state
2016/03/17 06:40:39 [INFO] serf: EventMemberJoin: Node 15413 127.0.0.1
2016/03/17 06:40:39 [INFO] consul: adding LAN server Node 15413 (Addr: 127.0.0.1:15414) (DC: dc1)
2016/03/17 06:40:39 [INFO] serf: EventMemberJoin: Node 15413.dc1 127.0.0.1
2016/03/17 06:40:39 [INFO] consul: adding WAN server Node 15413.dc1 (Addr: 127.0.0.1:15414) (DC: dc1)
2016/03/17 06:40:39 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:39 [INFO] raft: Node at 127.0.0.1:15414 [Candidate] entering Candidate state
2016/03/17 06:40:39 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:39 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:39 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:39 [INFO] raft: Node at 127.0.0.1:15414 [Leader] entering Leader state
2016/03/17 06:40:39 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:39 [INFO] consul: New leader elected: Node 15413
2016/03/17 06:40:39 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:39 [DEBUG] raft: Node 127.0.0.1:15414 updated peer set (2): [127.0.0.1:15414]
2016/03/17 06:40:39 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:39 [INFO] consul: member 'Node 15413' joined, marking health alive
2016/03/17 06:40:41 [INFO] consul: shutting down server
2016/03/17 06:40:41 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:41 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:41 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:40:41 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVSEndpoint_ListKeys (3.21s)
=== RUN   TestKVSEndpoint_ListKeys_ACLDeny
2016/03/17 06:40:42 [INFO] raft: Node at 127.0.0.1:15418 [Follower] entering Follower state
2016/03/17 06:40:42 [INFO] serf: EventMemberJoin: Node 15417 127.0.0.1
2016/03/17 06:40:42 [INFO] consul: adding LAN server Node 15417 (Addr: 127.0.0.1:15418) (DC: dc1)
2016/03/17 06:40:42 [INFO] serf: EventMemberJoin: Node 15417.dc1 127.0.0.1
2016/03/17 06:40:42 [INFO] consul: adding WAN server Node 15417.dc1 (Addr: 127.0.0.1:15418) (DC: dc1)
2016/03/17 06:40:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:42 [INFO] raft: Node at 127.0.0.1:15418 [Candidate] entering Candidate state
2016/03/17 06:40:42 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:42 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:42 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:42 [INFO] raft: Node at 127.0.0.1:15418 [Leader] entering Leader state
2016/03/17 06:40:42 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:42 [INFO] consul: New leader elected: Node 15417
2016/03/17 06:40:42 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:42 [DEBUG] raft: Node 127.0.0.1:15418 updated peer set (2): [127.0.0.1:15418]
2016/03/17 06:40:43 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:43 [INFO] consul: member 'Node 15417' joined, marking health alive
2016/03/17 06:40:46 [INFO] consul: shutting down server
2016/03/17 06:40:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:46 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVSEndpoint_ListKeys_ACLDeny (5.03s)
=== RUN   TestKVS_Apply_LockDelay
2016/03/17 06:40:47 [INFO] raft: Node at 127.0.0.1:15422 [Follower] entering Follower state
2016/03/17 06:40:47 [INFO] serf: EventMemberJoin: Node 15421 127.0.0.1
2016/03/17 06:40:47 [INFO] consul: adding LAN server Node 15421 (Addr: 127.0.0.1:15422) (DC: dc1)
2016/03/17 06:40:47 [INFO] serf: EventMemberJoin: Node 15421.dc1 127.0.0.1
2016/03/17 06:40:47 [INFO] consul: adding WAN server Node 15421.dc1 (Addr: 127.0.0.1:15422) (DC: dc1)
2016/03/17 06:40:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:47 [INFO] raft: Node at 127.0.0.1:15422 [Candidate] entering Candidate state
2016/03/17 06:40:47 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:47 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:47 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:47 [INFO] raft: Node at 127.0.0.1:15422 [Leader] entering Leader state
2016/03/17 06:40:47 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:47 [INFO] consul: New leader elected: Node 15421
2016/03/17 06:40:47 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:48 [DEBUG] raft: Node 127.0.0.1:15422 updated peer set (2): [127.0.0.1:15422]
2016/03/17 06:40:48 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:48 [INFO] consul: member 'Node 15421' joined, marking health alive
2016/03/17 06:40:48 [WARN] consul.kvs: Rejecting lock of test due to lock-delay until 2016-03-17 06:40:48.319631937 +0000 UTC
2016/03/17 06:40:48 [INFO] consul: shutting down server
2016/03/17 06:40:48 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:48 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:48 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVS_Apply_LockDelay (2.09s)
=== RUN   TestLeader_RegisterMember
2016/03/17 06:40:49 [INFO] raft: Node at 127.0.0.1:15426 [Follower] entering Follower state
2016/03/17 06:40:49 [INFO] serf: EventMemberJoin: Node 15425 127.0.0.1
2016/03/17 06:40:49 [INFO] consul: adding LAN server Node 15425 (Addr: 127.0.0.1:15426) (DC: dc1)
2016/03/17 06:40:49 [INFO] serf: EventMemberJoin: Node 15425.dc1 127.0.0.1
2016/03/17 06:40:49 [INFO] consul: adding WAN server Node 15425.dc1 (Addr: 127.0.0.1:15426) (DC: dc1)
2016/03/17 06:40:49 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:40:49 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15427
2016/03/17 06:40:49 [DEBUG] memberlist: TCP connection from=127.0.0.1:54651
2016/03/17 06:40:49 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:40:49 [INFO] serf: EventMemberJoin: Node 15425 127.0.0.1
2016/03/17 06:40:49 [INFO] consul: adding server Node 15425 (Addr: 127.0.0.1:15426) (DC: dc1)
2016/03/17 06:40:49 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:49 [INFO] raft: Node at 127.0.0.1:15426 [Candidate] entering Candidate state
2016/03/17 06:40:49 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:49 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:49 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:49 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:49 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:49 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:49 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:49 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:49 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:49 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:49 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:49 [INFO] raft: Node at 127.0.0.1:15426 [Leader] entering Leader state
2016/03/17 06:40:49 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:49 [INFO] consul: New leader elected: Node 15425
2016/03/17 06:40:49 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:49 [INFO] consul: New leader elected: Node 15425
2016/03/17 06:40:49 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:50 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:50 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:50 [DEBUG] raft: Node 127.0.0.1:15426 updated peer set (2): [127.0.0.1:15426]
2016/03/17 06:40:50 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:50 [INFO] consul: member 'Node 15425' joined, marking health alive
2016/03/17 06:40:50 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/03/17 06:40:50 [INFO] consul: shutting down client
2016/03/17 06:40:50 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:50 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/17 06:40:50 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/17 06:40:50 [INFO] consul: shutting down server
2016/03/17 06:40:50 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:51 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:51 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/03/17 06:40:51 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/03/17 06:40:51 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:40:51 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestLeader_RegisterMember (2.42s)
=== RUN   TestLeader_FailedMember
2016/03/17 06:40:51 [INFO] raft: Node at 127.0.0.1:15432 [Follower] entering Follower state
2016/03/17 06:40:51 [INFO] serf: EventMemberJoin: Node 15431 127.0.0.1
2016/03/17 06:40:51 [INFO] consul: adding LAN server Node 15431 (Addr: 127.0.0.1:15432) (DC: dc1)
2016/03/17 06:40:51 [INFO] serf: EventMemberJoin: Node 15431.dc1 127.0.0.1
2016/03/17 06:40:51 [INFO] consul: adding WAN server Node 15431.dc1 (Addr: 127.0.0.1:15432) (DC: dc1)
2016/03/17 06:40:51 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:40:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:51 [INFO] raft: Node at 127.0.0.1:15432 [Candidate] entering Candidate state
2016/03/17 06:40:52 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:52 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:52 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:52 [INFO] raft: Node at 127.0.0.1:15432 [Leader] entering Leader state
2016/03/17 06:40:52 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:52 [INFO] consul: New leader elected: Node 15431
2016/03/17 06:40:52 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:52 [DEBUG] raft: Node 127.0.0.1:15432 updated peer set (2): [127.0.0.1:15432]
2016/03/17 06:40:52 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:52 [INFO] consul: member 'Node 15431' joined, marking health alive
2016/03/17 06:40:52 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15433
2016/03/17 06:40:52 [DEBUG] memberlist: TCP connection from=127.0.0.1:48058
2016/03/17 06:40:52 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:40:52 [INFO] serf: EventMemberJoin: Node 15431 127.0.0.1
2016/03/17 06:40:52 [INFO] consul: shutting down client
2016/03/17 06:40:52 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:52 [INFO] consul: adding server Node 15431 (Addr: 127.0.0.1:15432) (DC: dc1)
2016/03/17 06:40:52 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/17 06:40:52 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/17 06:40:53 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/03/17 06:40:53 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/17 06:40:53 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/17 06:40:53 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/03/17 06:40:53 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/03/17 06:40:53 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/17 06:40:53 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/17 06:40:53 [INFO] consul: member 'testco.internal' failed, marking health critical
2016/03/17 06:40:53 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/03/17 06:40:53 [INFO] consul: shutting down client
2016/03/17 06:40:53 [INFO] consul: shutting down server
2016/03/17 06:40:53 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:53 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:53 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/03/17 06:40:53 [ERR] consul: failed to reconcile member: {testco.internal 127.0.0.1 15436 map[vsn_min:1 vsn_max:3 build: role:node dc:dc1 vsn:2] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:40:53 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestLeader_FailedMember (2.64s)
=== RUN   TestLeader_LeftMember
2016/03/17 06:40:54 [INFO] raft: Node at 127.0.0.1:15438 [Follower] entering Follower state
2016/03/17 06:40:54 [INFO] serf: EventMemberJoin: Node 15437 127.0.0.1
2016/03/17 06:40:54 [INFO] consul: adding LAN server Node 15437 (Addr: 127.0.0.1:15438) (DC: dc1)
2016/03/17 06:40:54 [INFO] serf: EventMemberJoin: Node 15437.dc1 127.0.0.1
2016/03/17 06:40:54 [INFO] consul: adding WAN server Node 15437.dc1 (Addr: 127.0.0.1:15438) (DC: dc1)
2016/03/17 06:40:54 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:40:54 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15439
2016/03/17 06:40:54 [DEBUG] memberlist: TCP connection from=127.0.0.1:58502
2016/03/17 06:40:54 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:40:54 [INFO] serf: EventMemberJoin: Node 15437 127.0.0.1
2016/03/17 06:40:54 [INFO] consul: adding server Node 15437 (Addr: 127.0.0.1:15438) (DC: dc1)
2016/03/17 06:40:54 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:54 [INFO] raft: Node at 127.0.0.1:15438 [Candidate] entering Candidate state
2016/03/17 06:40:54 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:54 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:54 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:54 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:54 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:54 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:54 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:54 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:54 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:54 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:54 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:54 [INFO] raft: Node at 127.0.0.1:15438 [Leader] entering Leader state
2016/03/17 06:40:54 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:54 [INFO] consul: New leader elected: Node 15437
2016/03/17 06:40:54 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:54 [INFO] consul: New leader elected: Node 15437
2016/03/17 06:40:54 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:55 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:55 [DEBUG] raft: Node 127.0.0.1:15438 updated peer set (2): [127.0.0.1:15438]
2016/03/17 06:40:55 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:55 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:55 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:55 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:55 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:55 [INFO] consul: member 'Node 15437' joined, marking health alive
2016/03/17 06:40:55 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:55 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:55 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/03/17 06:40:55 [INFO] consul: client starting leave
2016/03/17 06:40:55 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/17 06:40:55 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/17 06:40:55 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/17 06:40:55 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/17 06:40:55 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/17 06:40:55 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/17 06:40:55 [INFO] serf: EventMemberLeave: testco.internal 127.0.0.1
2016/03/17 06:40:55 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/17 06:40:55 [DEBUG] serf: messageLeaveType: testco.internal
2016/03/17 06:40:55 [INFO] serf: EventMemberLeave: testco.internal 127.0.0.1
2016/03/17 06:40:55 [INFO] consul: member 'testco.internal' left, deregistering
2016/03/17 06:40:56 [INFO] consul: shutting down client
2016/03/17 06:40:56 [INFO] consul: shutting down client
2016/03/17 06:40:56 [INFO] consul: shutting down server
2016/03/17 06:40:56 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:56 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:56 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestLeader_LeftMember (2.72s)
=== RUN   TestLeader_ReapMember
2016/03/17 06:40:57 [INFO] raft: Node at 127.0.0.1:15444 [Follower] entering Follower state
2016/03/17 06:40:57 [INFO] serf: EventMemberJoin: Node 15443 127.0.0.1
2016/03/17 06:40:57 [INFO] consul: adding LAN server Node 15443 (Addr: 127.0.0.1:15444) (DC: dc1)
2016/03/17 06:40:57 [INFO] serf: EventMemberJoin: Node 15443.dc1 127.0.0.1
2016/03/17 06:40:57 [INFO] consul: adding WAN server Node 15443.dc1 (Addr: 127.0.0.1:15444) (DC: dc1)
2016/03/17 06:40:57 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:40:57 [DEBUG] memberlist: TCP connection from=127.0.0.1:56263
2016/03/17 06:40:57 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15445
2016/03/17 06:40:57 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:40:57 [INFO] serf: EventMemberJoin: Node 15443 127.0.0.1
2016/03/17 06:40:57 [INFO] consul: adding server Node 15443 (Addr: 127.0.0.1:15444) (DC: dc1)
2016/03/17 06:40:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:40:57 [INFO] raft: Node at 127.0.0.1:15444 [Candidate] entering Candidate state
2016/03/17 06:40:57 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:57 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:57 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:57 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:57 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:57 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:57 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:57 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:40:57 [DEBUG] raft: Votes needed: 1
2016/03/17 06:40:57 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:40:57 [INFO] raft: Election won. Tally: 1
2016/03/17 06:40:57 [INFO] raft: Node at 127.0.0.1:15444 [Leader] entering Leader state
2016/03/17 06:40:57 [INFO] consul: cluster leadership acquired
2016/03/17 06:40:57 [INFO] consul: New leader elected: Node 15443
2016/03/17 06:40:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:57 [INFO] consul: New leader elected: Node 15443
2016/03/17 06:40:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:40:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:57 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:40:58 [DEBUG] raft: Node 127.0.0.1:15444 updated peer set (2): [127.0.0.1:15444]
2016/03/17 06:40:58 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:40:58 [INFO] consul: member 'Node 15443' joined, marking health alive
2016/03/17 06:40:58 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/03/17 06:40:59 [INFO] consul: member 'testco.internal' reaped, deregistering
2016/03/17 06:40:59 [INFO] consul: shutting down client
2016/03/17 06:40:59 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:59 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/17 06:40:59 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/17 06:40:59 [INFO] consul: shutting down server
2016/03/17 06:40:59 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:59 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/03/17 06:40:59 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/03/17 06:40:59 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/03/17 06:40:59 [WARN] serf: Shutdown without a Leave
2016/03/17 06:40:59 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/03/17 06:40:59 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/03/17 06:41:00 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/03/17 06:41:00 [ERR] consul: failed to reconcile member: {testco.internal 127.0.0.1 15448 map[vsn:2 vsn_min:1 vsn_max:3 build: role:node dc:dc1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:41:00 [ERR] consul: failed to reconcile: leadership lost while committing log
--- PASS: TestLeader_ReapMember (3.50s)
=== RUN   TestLeader_Reconcile_ReapMember
2016/03/17 06:41:00 [INFO] raft: Node at 127.0.0.1:15450 [Follower] entering Follower state
2016/03/17 06:41:00 [INFO] serf: EventMemberJoin: Node 15449 127.0.0.1
2016/03/17 06:41:00 [INFO] consul: adding LAN server Node 15449 (Addr: 127.0.0.1:15450) (DC: dc1)
2016/03/17 06:41:00 [INFO] serf: EventMemberJoin: Node 15449.dc1 127.0.0.1
2016/03/17 06:41:00 [INFO] consul: adding WAN server Node 15449.dc1 (Addr: 127.0.0.1:15450) (DC: dc1)
2016/03/17 06:41:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:00 [INFO] raft: Node at 127.0.0.1:15450 [Candidate] entering Candidate state
2016/03/17 06:41:01 [DEBUG] raft: Votes needed: 1
2016/03/17 06:41:01 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:01 [INFO] raft: Election won. Tally: 1
2016/03/17 06:41:01 [INFO] raft: Node at 127.0.0.1:15450 [Leader] entering Leader state
2016/03/17 06:41:01 [INFO] consul: cluster leadership acquired
2016/03/17 06:41:01 [INFO] consul: New leader elected: Node 15449
2016/03/17 06:41:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:41:01 [DEBUG] raft: Node 127.0.0.1:15450 updated peer set (2): [127.0.0.1:15450]
2016/03/17 06:41:01 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:41:01 [INFO] consul: member 'Node 15449' joined, marking health alive
2016/03/17 06:41:02 [INFO] consul: member 'no-longer-around' reaped, deregistering
2016/03/17 06:41:02 [INFO] consul: member 'no-longer-around' reaped, deregistering
2016/03/17 06:41:02 [INFO] consul: shutting down server
2016/03/17 06:41:02 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:02 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:02 [ERR] consul.catalog: Deregister failed: leadership lost while committing log
2016/03/17 06:41:02 [ERR] consul: failed to reconcile: leadership lost while committing log
--- PASS: TestLeader_Reconcile_ReapMember (2.69s)
=== RUN   TestLeader_Reconcile
2016/03/17 06:41:03 [INFO] raft: Node at 127.0.0.1:15454 [Follower] entering Follower state
2016/03/17 06:41:03 [INFO] serf: EventMemberJoin: Node 15453 127.0.0.1
2016/03/17 06:41:03 [INFO] consul: adding LAN server Node 15453 (Addr: 127.0.0.1:15454) (DC: dc1)
2016/03/17 06:41:03 [INFO] serf: EventMemberJoin: Node 15453.dc1 127.0.0.1
2016/03/17 06:41:03 [INFO] consul: adding WAN server Node 15453.dc1 (Addr: 127.0.0.1:15454) (DC: dc1)
2016/03/17 06:41:03 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:41:03 [DEBUG] memberlist: TCP connection from=127.0.0.1:32965
2016/03/17 06:41:03 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15455
2016/03/17 06:41:03 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/03/17 06:41:03 [INFO] serf: EventMemberJoin: Node 15453 127.0.0.1
2016/03/17 06:41:03 [INFO] consul: adding server Node 15453 (Addr: 127.0.0.1:15454) (DC: dc1)
2016/03/17 06:41:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:03 [INFO] raft: Node at 127.0.0.1:15454 [Candidate] entering Candidate state
2016/03/17 06:41:03 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:41:03 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:41:04 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:41:04 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:41:04 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:41:04 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:41:04 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:41:04 [DEBUG] serf: messageJoinType: testco.internal
2016/03/17 06:41:04 [DEBUG] raft: Votes needed: 1
2016/03/17 06:41:04 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:04 [INFO] raft: Election won. Tally: 1
2016/03/17 06:41:04 [INFO] raft: Node at 127.0.0.1:15454 [Leader] entering Leader state
2016/03/17 06:41:04 [INFO] consul: cluster leadership acquired
2016/03/17 06:41:04 [INFO] consul: New leader elected: Node 15453
2016/03/17 06:41:04 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:04 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:04 [INFO] consul: New leader elected: Node 15453
2016/03/17 06:41:04 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:04 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:41:04 [DEBUG] raft: Node 127.0.0.1:15454 updated peer set (2): [127.0.0.1:15454]
2016/03/17 06:41:04 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:04 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:04 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:04 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:04 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:41:04 [INFO] consul: member 'Node 15453' joined, marking health alive
2016/03/17 06:41:04 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/03/17 06:41:05 [INFO] consul: shutting down client
2016/03/17 06:41:05 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:05 [INFO] consul: shutting down server
2016/03/17 06:41:05 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:05 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:05 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestLeader_Reconcile (2.72s)
=== RUN   TestLeader_LeftServer
2016/03/17 06:41:05 [INFO] raft: Node at 127.0.0.1:15460 [Follower] entering Follower state
2016/03/17 06:41:06 [INFO] serf: EventMemberJoin: Node 15459 127.0.0.1
2016/03/17 06:41:06 [INFO] consul: adding LAN server Node 15459 (Addr: 127.0.0.1:15460) (DC: dc1)
2016/03/17 06:41:06 [INFO] serf: EventMemberJoin: Node 15459.dc1 127.0.0.1
2016/03/17 06:41:06 [INFO] consul: adding WAN server Node 15459.dc1 (Addr: 127.0.0.1:15460) (DC: dc1)
2016/03/17 06:41:06 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:06 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:06 [INFO] raft: Node at 127.0.0.1:15464 [Follower] entering Follower state
2016/03/17 06:41:06 [INFO] serf: EventMemberJoin: Node 15463 127.0.0.1
2016/03/17 06:41:06 [INFO] consul: adding LAN server Node 15463 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/03/17 06:41:06 [INFO] serf: EventMemberJoin: Node 15463.dc1 127.0.0.1
2016/03/17 06:41:06 [INFO] consul: adding WAN server Node 15463.dc1 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/03/17 06:41:06 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:41:06 [DEBUG] raft: Votes needed: 1
2016/03/17 06:41:06 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:06 [INFO] raft: Election won. Tally: 1
2016/03/17 06:41:06 [INFO] raft: Node at 127.0.0.1:15460 [Leader] entering Leader state
2016/03/17 06:41:06 [INFO] consul: cluster leadership acquired
2016/03/17 06:41:06 [INFO] consul: New leader elected: Node 15459
2016/03/17 06:41:06 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:41:06 [DEBUG] raft: Node 127.0.0.1:15460 updated peer set (2): [127.0.0.1:15460]
2016/03/17 06:41:07 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:41:07 [INFO] consul: member 'Node 15459' joined, marking health alive
2016/03/17 06:41:07 [INFO] raft: Node at 127.0.0.1:15468 [Follower] entering Follower state
2016/03/17 06:41:07 [INFO] serf: EventMemberJoin: Node 15467 127.0.0.1
2016/03/17 06:41:07 [INFO] consul: adding LAN server Node 15467 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/03/17 06:41:07 [INFO] serf: EventMemberJoin: Node 15467.dc1 127.0.0.1
2016/03/17 06:41:07 [INFO] consul: adding WAN server Node 15467.dc1 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/03/17 06:41:07 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15461
2016/03/17 06:41:07 [DEBUG] memberlist: TCP connection from=127.0.0.1:49416
2016/03/17 06:41:07 [INFO] serf: EventMemberJoin: Node 15463 127.0.0.1
2016/03/17 06:41:07 [INFO] consul: adding LAN server Node 15463 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/03/17 06:41:07 [INFO] serf: EventMemberJoin: Node 15459 127.0.0.1
2016/03/17 06:41:07 [INFO] consul: adding LAN server Node 15459 (Addr: 127.0.0.1:15460) (DC: dc1)
2016/03/17 06:41:07 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15461
2016/03/17 06:41:07 [DEBUG] memberlist: TCP connection from=127.0.0.1:49417
2016/03/17 06:41:07 [INFO] serf: EventMemberJoin: Node 15467 127.0.0.1
2016/03/17 06:41:07 [INFO] serf: EventMemberJoin: Node 15463 127.0.0.1
2016/03/17 06:41:07 [INFO] serf: EventMemberJoin: Node 15459 127.0.0.1
2016/03/17 06:41:07 [INFO] consul: adding LAN server Node 15467 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/03/17 06:41:07 [INFO] consul: adding LAN server Node 15463 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/03/17 06:41:07 [INFO] consul: adding LAN server Node 15459 (Addr: 127.0.0.1:15460) (DC: dc1)
2016/03/17 06:41:07 [DEBUG] raft: Node 127.0.0.1:15460 updated peer set (2): [127.0.0.1:15464 127.0.0.1:15460]
2016/03/17 06:41:07 [INFO] raft: Added peer 127.0.0.1:15464, starting replication
2016/03/17 06:41:07 [DEBUG] raft-net: 127.0.0.1:15464 accepted connection from: 127.0.0.1:39936
2016/03/17 06:41:07 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:41:07 [DEBUG] raft-net: 127.0.0.1:15464 accepted connection from: 127.0.0.1:39937
2016/03/17 06:41:07 [INFO] serf: EventMemberJoin: Node 15467 127.0.0.1
2016/03/17 06:41:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:07 [INFO] consul: adding LAN server Node 15467 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/03/17 06:41:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15463
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15463
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15463
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15463
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15463
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15467
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15467
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15463
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15467
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15463
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15467
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15463
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15467
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15467
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15463
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15467
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15463
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15467
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15463
2016/03/17 06:41:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15467
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15467
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15467
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15467
2016/03/17 06:41:07 [DEBUG] serf: messageJoinType: Node 15463
2016/03/17 06:41:07 [DEBUG] raft: Failed to contact 127.0.0.1:15464 in 164.054667ms
2016/03/17 06:41:07 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/17 06:41:07 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:41:07 [INFO] raft: Node at 127.0.0.1:15460 [Follower] entering Follower state
2016/03/17 06:41:07 [INFO] consul: cluster leadership lost
2016/03/17 06:41:07 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:41:07 [WARN] raft: AppendEntries to 127.0.0.1:15464 rejected, sending older logs (next: 1)
2016/03/17 06:41:07 [ERR] consul: failed to reconcile member: {Node 15463 127.0.0.1 15465 map[port:15464 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build:] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:41:07 [ERR] consul: failed to add raft peer: node is not the leader
2016/03/17 06:41:07 [ERR] consul: failed to reconcile member: {Node 15467 127.0.0.1 15469 map[role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15468] alive 1 3 2 2 4 4}: node is not the leader
2016/03/17 06:41:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:07 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:08 [DEBUG] raft-net: 127.0.0.1:15464 accepted connection from: 127.0.0.1:39938
2016/03/17 06:41:08 [DEBUG] raft: Node 127.0.0.1:15464 updated peer set (2): [127.0.0.1:15460]
2016/03/17 06:41:08 [WARN] raft: Rejecting vote from 127.0.0.1:15460 since we have a leader: 127.0.0.1:15460
2016/03/17 06:41:08 [INFO] raft: pipelining replication to peer 127.0.0.1:15464
2016/03/17 06:41:08 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15464
2016/03/17 06:41:08 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:08 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/03/17 06:41:08 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:08 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:08 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:08 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:08 [DEBUG] raft-net: 127.0.0.1:15460 accepted connection from: 127.0.0.1:43239
2016/03/17 06:41:09 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:09 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:09 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:09 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:09 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:09 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:10 [INFO] raft: Node at 127.0.0.1:15464 [Follower] entering Follower state
2016/03/17 06:41:10 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:10 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:10 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:10 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:10 [DEBUG] memberlist: Potential blocking operation. Last command took 11.143667ms
2016/03/17 06:41:11 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:11 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:11 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:11 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:12 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:12 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:12 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:12 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:13 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:13 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:13 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:13 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:13 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:13 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/03/17 06:41:14 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:14 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:14 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/17 06:41:14 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:14 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:14 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:14 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:14 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/17 06:41:15 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:15 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:15 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:15 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:15 [INFO] raft: Node at 127.0.0.1:15464 [Follower] entering Follower state
2016/03/17 06:41:16 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:16 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:16 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:16 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:16 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/03/17 06:41:17 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:17 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:41:17 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:17 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:17 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:17 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:17 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:41:17 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:17 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:17 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/03/17 06:41:18 [INFO] consul: shutting down server
2016/03/17 06:41:18 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:18 [DEBUG] memberlist: Failed UDP ping: Node 15467 (timeout reached)
2016/03/17 06:41:18 [INFO] memberlist: Suspect Node 15467 has failed, no acks received
2016/03/17 06:41:18 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:18 [DEBUG] memberlist: Failed UDP ping: Node 15467 (timeout reached)
2016/03/17 06:41:18 [DEBUG] memberlist: Failed UDP ping: Node 15467 (timeout reached)
2016/03/17 06:41:18 [INFO] memberlist: Suspect Node 15467 has failed, no acks received
2016/03/17 06:41:18 [INFO] memberlist: Suspect Node 15467 has failed, no acks received
2016/03/17 06:41:18 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:18 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:18 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:41:18 [INFO] consul: shutting down server
2016/03/17 06:41:18 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:18 [DEBUG] memberlist: Failed UDP ping: Node 15467 (timeout reached)
2016/03/17 06:41:18 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:18 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:18 [INFO] memberlist: Suspect Node 15467 has failed, no acks received
2016/03/17 06:41:18 [INFO] memberlist: Marking Node 15467 as failed, suspect timeout reached
2016/03/17 06:41:18 [INFO] serf: EventMemberFailed: Node 15467 127.0.0.1
2016/03/17 06:41:18 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:18 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:41:18 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:18 [DEBUG] memberlist: Failed UDP ping: Node 15463 (timeout reached)
2016/03/17 06:41:18 [INFO] memberlist: Marking Node 15467 as failed, suspect timeout reached
2016/03/17 06:41:18 [INFO] serf: EventMemberFailed: Node 15467 127.0.0.1
2016/03/17 06:41:18 [INFO] consul: removing LAN server Node 15467 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/03/17 06:41:18 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:18 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:18 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/03/17 06:41:18 [INFO] memberlist: Suspect Node 15463 has failed, no acks received
2016/03/17 06:41:18 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:41:18 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15464: EOF
2016/03/17 06:41:18 [DEBUG] memberlist: Failed UDP ping: Node 15463 (timeout reached)
2016/03/17 06:41:18 [INFO] memberlist: Suspect Node 15463 has failed, no acks received
2016/03/17 06:41:18 [INFO] memberlist: Marking Node 15463 as failed, suspect timeout reached
2016/03/17 06:41:18 [INFO] serf: EventMemberFailed: Node 15463 127.0.0.1
2016/03/17 06:41:18 [INFO] consul: removing LAN server Node 15463 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/03/17 06:41:19 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:19 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/17 06:41:19 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:19 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:19 [INFO] raft: Node at 127.0.0.1:15460 [Candidate] entering Candidate state
2016/03/17 06:41:19 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:19 [INFO] consul: shutting down server
2016/03/17 06:41:19 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:19 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:19 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:41:19 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15464: EOF
2016/03/17 06:41:20 [DEBUG] raft: Votes needed: 2
--- FAIL: TestLeader_LeftServer (14.71s)
	leader_test.go:347: should have 3 peers
=== RUN   TestLeader_LeftLeader
2016/03/17 06:41:20 [INFO] raft: Node at 127.0.0.1:15472 [Follower] entering Follower state
2016/03/17 06:41:20 [INFO] serf: EventMemberJoin: Node 15471 127.0.0.1
2016/03/17 06:41:20 [INFO] consul: adding LAN server Node 15471 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/03/17 06:41:20 [INFO] serf: EventMemberJoin: Node 15471.dc1 127.0.0.1
2016/03/17 06:41:20 [INFO] consul: adding WAN server Node 15471.dc1 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/03/17 06:41:20 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:20 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:21 [DEBUG] raft: Votes needed: 1
2016/03/17 06:41:21 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:21 [INFO] raft: Election won. Tally: 1
2016/03/17 06:41:21 [INFO] raft: Node at 127.0.0.1:15472 [Leader] entering Leader state
2016/03/17 06:41:21 [INFO] consul: cluster leadership acquired
2016/03/17 06:41:21 [INFO] consul: New leader elected: Node 15471
2016/03/17 06:41:21 [INFO] raft: Node at 127.0.0.1:15476 [Follower] entering Follower state
2016/03/17 06:41:21 [INFO] serf: EventMemberJoin: Node 15475 127.0.0.1
2016/03/17 06:41:21 [INFO] consul: adding LAN server Node 15475 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/03/17 06:41:21 [INFO] serf: EventMemberJoin: Node 15475.dc1 127.0.0.1
2016/03/17 06:41:21 [INFO] consul: adding WAN server Node 15475.dc1 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/03/17 06:41:21 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:41:21 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:41:21 [DEBUG] raft: Node 127.0.0.1:15472 updated peer set (2): [127.0.0.1:15472]
2016/03/17 06:41:21 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:41:21 [INFO] consul: member 'Node 15471' joined, marking health alive
2016/03/17 06:41:22 [INFO] raft: Node at 127.0.0.1:15480 [Follower] entering Follower state
2016/03/17 06:41:22 [INFO] serf: EventMemberJoin: Node 15479 127.0.0.1
2016/03/17 06:41:22 [INFO] consul: adding LAN server Node 15479 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/03/17 06:41:22 [INFO] serf: EventMemberJoin: Node 15479.dc1 127.0.0.1
2016/03/17 06:41:22 [INFO] consul: adding WAN server Node 15479.dc1 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/03/17 06:41:22 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15473
2016/03/17 06:41:22 [DEBUG] memberlist: TCP connection from=127.0.0.1:60186
2016/03/17 06:41:22 [INFO] serf: EventMemberJoin: Node 15475 127.0.0.1
2016/03/17 06:41:22 [INFO] consul: adding LAN server Node 15475 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/03/17 06:41:22 [INFO] serf: EventMemberJoin: Node 15471 127.0.0.1
2016/03/17 06:41:22 [INFO] consul: adding LAN server Node 15471 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/03/17 06:41:22 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15473
2016/03/17 06:41:22 [DEBUG] memberlist: TCP connection from=127.0.0.1:60187
2016/03/17 06:41:22 [INFO] serf: EventMemberJoin: Node 15479 127.0.0.1
2016/03/17 06:41:22 [INFO] consul: adding LAN server Node 15479 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/03/17 06:41:22 [INFO] serf: EventMemberJoin: Node 15475 127.0.0.1
2016/03/17 06:41:22 [INFO] consul: adding LAN server Node 15475 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/03/17 06:41:22 [INFO] serf: EventMemberJoin: Node 15471 127.0.0.1
2016/03/17 06:41:22 [INFO] consul: adding LAN server Node 15471 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/03/17 06:41:22 [INFO] serf: EventMemberJoin: Node 15479 127.0.0.1
2016/03/17 06:41:22 [INFO] consul: adding LAN server Node 15479 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/03/17 06:41:22 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:22 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:22 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15479
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15475
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15475
2016/03/17 06:41:22 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15475
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15479
2016/03/17 06:41:22 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15475
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15479
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15479
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15475
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15475
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15479
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15475
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15479
2016/03/17 06:41:22 [DEBUG] raft: Node 127.0.0.1:15472 updated peer set (2): [127.0.0.1:15476 127.0.0.1:15472]
2016/03/17 06:41:22 [INFO] raft: Added peer 127.0.0.1:15476, starting replication
2016/03/17 06:41:22 [DEBUG] raft-net: 127.0.0.1:15476 accepted connection from: 127.0.0.1:58472
2016/03/17 06:41:22 [DEBUG] raft-net: 127.0.0.1:15476 accepted connection from: 127.0.0.1:58473
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15475
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15479
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15475
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15479
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15475
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15479
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15479
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15479
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15479
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15475
2016/03/17 06:41:22 [DEBUG] serf: messageJoinType: Node 15475
2016/03/17 06:41:22 [DEBUG] raft: Failed to contact 127.0.0.1:15476 in 257.084ms
2016/03/17 06:41:22 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:41:22 [INFO] raft: Node at 127.0.0.1:15472 [Follower] entering Follower state
2016/03/17 06:41:22 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:41:22 [ERR] consul: failed to reconcile member: {Node 15475 127.0.0.1 15477 map[role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15476] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:41:22 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:41:22 [INFO] consul: cluster leadership lost
2016/03/17 06:41:22 [WARN] raft: Failed to get previous log: 4 log not found (last: 0)
2016/03/17 06:41:22 [WARN] raft: AppendEntries to 127.0.0.1:15476 rejected, sending older logs (next: 1)
2016/03/17 06:41:22 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:22 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:23 [DEBUG] raft-net: 127.0.0.1:15476 accepted connection from: 127.0.0.1:58474
2016/03/17 06:41:23 [DEBUG] raft: Node 127.0.0.1:15476 updated peer set (2): [127.0.0.1:15472]
2016/03/17 06:41:23 [WARN] raft: Rejecting vote from 127.0.0.1:15472 since we have a leader: 127.0.0.1:15472
2016/03/17 06:41:23 [INFO] raft: pipelining replication to peer 127.0.0.1:15476
2016/03/17 06:41:23 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15476
2016/03/17 06:41:23 [INFO] raft: pipelining replication to peer 127.0.0.1:15476
2016/03/17 06:41:23 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15476
2016/03/17 06:41:23 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:23 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:23 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:23 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/03/17 06:41:23 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:23 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:24 [DEBUG] raft-net: 127.0.0.1:15472 accepted connection from: 127.0.0.1:51891
2016/03/17 06:41:24 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:24 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:24 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:24 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:24 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:24 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:25 [DEBUG] raft-net: 127.0.0.1:15476 accepted connection from: 127.0.0.1:58476
2016/03/17 06:41:25 [INFO] raft: Node at 127.0.0.1:15476 [Follower] entering Follower state
2016/03/17 06:41:25 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:25 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:25 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:25 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:26 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/03/17 06:41:26 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:26 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:26 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:26 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:27 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:27 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/17 06:41:27 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:27 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:27 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:27 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:27 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:28 [INFO] raft: Node at 127.0.0.1:15476 [Follower] entering Follower state
2016/03/17 06:41:28 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:28 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:28 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:28 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:29 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:29 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:29 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:29 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:29 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/03/17 06:41:30 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:30 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:30 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/17 06:41:30 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:30 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:30 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:30 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/17 06:41:30 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:30 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:30 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/03/17 06:41:31 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:31 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:31 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/17 06:41:31 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:31 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:31 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:31 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/17 06:41:31 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:31 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:31 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/03/17 06:41:32 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:32 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:32 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:41:32 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:32 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:32 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:32 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:41:32 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:32 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:32 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/03/17 06:41:32 [INFO] consul: shutting down server
2016/03/17 06:41:32 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:33 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:33 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:33 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:41:33 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:33 [INFO] consul: shutting down server
2016/03/17 06:41:33 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:33 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:33 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:33 [DEBUG] memberlist: Failed UDP ping: Node 15479 (timeout reached)
2016/03/17 06:41:33 [INFO] memberlist: Suspect Node 15479 has failed, no acks received
2016/03/17 06:41:33 [DEBUG] memberlist: Failed UDP ping: Node 15479 (timeout reached)
2016/03/17 06:41:33 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:33 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:33 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:41:33 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:33 [INFO] memberlist: Suspect Node 15479 has failed, no acks received
2016/03/17 06:41:33 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:33 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/03/17 06:41:33 [DEBUG] memberlist: Failed UDP ping: Node 15475 (timeout reached)
2016/03/17 06:41:33 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:41:33 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15476: EOF
2016/03/17 06:41:33 [INFO] memberlist: Marking Node 15479 as failed, suspect timeout reached
2016/03/17 06:41:33 [INFO] serf: EventMemberFailed: Node 15479 127.0.0.1
2016/03/17 06:41:33 [INFO] memberlist: Suspect Node 15475 has failed, no acks received
2016/03/17 06:41:33 [INFO] consul: removing LAN server Node 15479 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/03/17 06:41:33 [DEBUG] memberlist: Failed UDP ping: Node 15475 (timeout reached)
2016/03/17 06:41:33 [INFO] memberlist: Suspect Node 15475 has failed, no acks received
2016/03/17 06:41:33 [INFO] memberlist: Marking Node 15475 as failed, suspect timeout reached
2016/03/17 06:41:33 [INFO] serf: EventMemberFailed: Node 15475 127.0.0.1
2016/03/17 06:41:33 [INFO] consul: removing LAN server Node 15475 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/03/17 06:41:34 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:34 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/17 06:41:34 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:34 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:34 [INFO] raft: Node at 127.0.0.1:15472 [Candidate] entering Candidate state
2016/03/17 06:41:34 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:34 [INFO] consul: shutting down server
2016/03/17 06:41:34 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:34 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:41:34 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15476: EOF
2016/03/17 06:41:34 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:35 [DEBUG] raft: Votes needed: 2
--- FAIL: TestLeader_LeftLeader (14.75s)
	leader_test.go:400: should have 3 peers
=== RUN   TestLeader_MultiBootstrap
2016/03/17 06:41:35 [INFO] raft: Node at 127.0.0.1:15484 [Follower] entering Follower state
2016/03/17 06:41:35 [INFO] serf: EventMemberJoin: Node 15483 127.0.0.1
2016/03/17 06:41:35 [INFO] consul: adding LAN server Node 15483 (Addr: 127.0.0.1:15484) (DC: dc1)
2016/03/17 06:41:35 [INFO] serf: EventMemberJoin: Node 15483.dc1 127.0.0.1
2016/03/17 06:41:35 [INFO] consul: adding WAN server Node 15483.dc1 (Addr: 127.0.0.1:15484) (DC: dc1)
2016/03/17 06:41:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:35 [INFO] raft: Node at 127.0.0.1:15484 [Candidate] entering Candidate state
2016/03/17 06:41:36 [INFO] raft: Node at 127.0.0.1:15488 [Follower] entering Follower state
2016/03/17 06:41:36 [INFO] serf: EventMemberJoin: Node 15487 127.0.0.1
2016/03/17 06:41:36 [INFO] consul: adding LAN server Node 15487 (Addr: 127.0.0.1:15488) (DC: dc1)
2016/03/17 06:41:36 [INFO] serf: EventMemberJoin: Node 15487.dc1 127.0.0.1
2016/03/17 06:41:36 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15485
2016/03/17 06:41:36 [DEBUG] memberlist: TCP connection from=127.0.0.1:43651
2016/03/17 06:41:36 [INFO] consul: adding WAN server Node 15487.dc1 (Addr: 127.0.0.1:15488) (DC: dc1)
2016/03/17 06:41:36 [INFO] serf: EventMemberJoin: Node 15487 127.0.0.1
2016/03/17 06:41:36 [INFO] serf: EventMemberJoin: Node 15483 127.0.0.1
2016/03/17 06:41:36 [INFO] consul: adding LAN server Node 15487 (Addr: 127.0.0.1:15488) (DC: dc1)
2016/03/17 06:41:36 [INFO] consul: adding LAN server Node 15483 (Addr: 127.0.0.1:15484) (DC: dc1)
2016/03/17 06:41:36 [INFO] consul: shutting down server
2016/03/17 06:41:36 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:36 [INFO] raft: Node at 127.0.0.1:15488 [Candidate] entering Candidate state
2016/03/17 06:41:36 [DEBUG] raft: Votes needed: 1
2016/03/17 06:41:36 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:36 [INFO] raft: Election won. Tally: 1
2016/03/17 06:41:36 [INFO] raft: Node at 127.0.0.1:15484 [Leader] entering Leader state
2016/03/17 06:41:36 [INFO] consul: cluster leadership acquired
2016/03/17 06:41:36 [INFO] consul: New leader elected: Node 15483
2016/03/17 06:41:36 [DEBUG] memberlist: Failed UDP ping: Node 15487 (timeout reached)
2016/03/17 06:41:36 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:36 [INFO] memberlist: Suspect Node 15487 has failed, no acks received
2016/03/17 06:41:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:41:36 [DEBUG] raft: Node 127.0.0.1:15484 updated peer set (2): [127.0.0.1:15484]
2016/03/17 06:41:36 [DEBUG] memberlist: Failed UDP ping: Node 15487 (timeout reached)
2016/03/17 06:41:36 [INFO] memberlist: Suspect Node 15487 has failed, no acks received
2016/03/17 06:41:36 [INFO] memberlist: Marking Node 15487 as failed, suspect timeout reached
2016/03/17 06:41:36 [INFO] serf: EventMemberFailed: Node 15487 127.0.0.1
2016/03/17 06:41:36 [INFO] consul: removing LAN server Node 15487 (Addr: 127.0.0.1:15488) (DC: dc1)
2016/03/17 06:41:36 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:41:36 [INFO] consul: member 'Node 15487' failed, marking health critical
2016/03/17 06:41:37 [DEBUG] raft: Votes needed: 1
2016/03/17 06:41:37 [INFO] consul: shutting down server
2016/03/17 06:41:37 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:37 [INFO] consul: member 'Node 15483' joined, marking health alive
2016/03/17 06:41:37 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:37 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/03/17 06:41:37 [ERR] consul: failed to reconcile member: {Node 15483 127.0.0.1 15485 map[vsn_max:3 build: port:15484 bootstrap:1 role:consul dc:dc1 vsn:2 vsn_min:1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:41:37 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:41:37 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestLeader_MultiBootstrap (2.37s)
=== RUN   TestLeader_TombstoneGC_Reset
2016/03/17 06:41:37 [INFO] raft: Node at 127.0.0.1:15492 [Follower] entering Follower state
2016/03/17 06:41:37 [INFO] serf: EventMemberJoin: Node 15491 127.0.0.1
2016/03/17 06:41:37 [INFO] consul: adding LAN server Node 15491 (Addr: 127.0.0.1:15492) (DC: dc1)
2016/03/17 06:41:37 [INFO] serf: EventMemberJoin: Node 15491.dc1 127.0.0.1
2016/03/17 06:41:37 [INFO] consul: adding WAN server Node 15491.dc1 (Addr: 127.0.0.1:15492) (DC: dc1)
2016/03/17 06:41:37 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:37 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:38 [INFO] raft: Node at 127.0.0.1:15496 [Follower] entering Follower state
2016/03/17 06:41:38 [INFO] serf: EventMemberJoin: Node 15495 127.0.0.1
2016/03/17 06:41:38 [INFO] consul: adding LAN server Node 15495 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/03/17 06:41:38 [INFO] serf: EventMemberJoin: Node 15495.dc1 127.0.0.1
2016/03/17 06:41:38 [INFO] consul: adding WAN server Node 15495.dc1 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/03/17 06:41:38 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:41:38 [DEBUG] raft: Votes needed: 1
2016/03/17 06:41:38 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:38 [INFO] raft: Election won. Tally: 1
2016/03/17 06:41:38 [INFO] raft: Node at 127.0.0.1:15492 [Leader] entering Leader state
2016/03/17 06:41:38 [INFO] consul: cluster leadership acquired
2016/03/17 06:41:38 [INFO] consul: New leader elected: Node 15491
2016/03/17 06:41:39 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:41:39 [DEBUG] raft: Node 127.0.0.1:15492 updated peer set (2): [127.0.0.1:15492]
2016/03/17 06:41:39 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:41:39 [INFO] consul: member 'Node 15491' joined, marking health alive
2016/03/17 06:41:39 [INFO] raft: Node at 127.0.0.1:15500 [Follower] entering Follower state
2016/03/17 06:41:39 [INFO] serf: EventMemberJoin: Node 15499 127.0.0.1
2016/03/17 06:41:39 [INFO] consul: adding LAN server Node 15499 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/03/17 06:41:39 [INFO] serf: EventMemberJoin: Node 15499.dc1 127.0.0.1
2016/03/17 06:41:39 [INFO] consul: adding WAN server Node 15499.dc1 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/03/17 06:41:39 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15493
2016/03/17 06:41:39 [DEBUG] memberlist: TCP connection from=127.0.0.1:58039
2016/03/17 06:41:39 [INFO] serf: EventMemberJoin: Node 15495 127.0.0.1
2016/03/17 06:41:39 [INFO] consul: adding LAN server Node 15495 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/03/17 06:41:39 [INFO] serf: EventMemberJoin: Node 15491 127.0.0.1
2016/03/17 06:41:39 [INFO] consul: adding LAN server Node 15491 (Addr: 127.0.0.1:15492) (DC: dc1)
2016/03/17 06:41:39 [DEBUG] memberlist: TCP connection from=127.0.0.1:58040
2016/03/17 06:41:39 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15493
2016/03/17 06:41:39 [INFO] serf: EventMemberJoin: Node 15499 127.0.0.1
2016/03/17 06:41:39 [INFO] consul: adding LAN server Node 15499 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/03/17 06:41:39 [INFO] serf: EventMemberJoin: Node 15495 127.0.0.1
2016/03/17 06:41:39 [INFO] consul: adding LAN server Node 15495 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/03/17 06:41:39 [INFO] serf: EventMemberJoin: Node 15491 127.0.0.1
2016/03/17 06:41:39 [INFO] consul: adding LAN server Node 15491 (Addr: 127.0.0.1:15492) (DC: dc1)
2016/03/17 06:41:39 [INFO] serf: EventMemberJoin: Node 15499 127.0.0.1
2016/03/17 06:41:39 [INFO] consul: adding LAN server Node 15499 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/03/17 06:41:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15499
2016/03/17 06:41:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:39 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15495
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15499
2016/03/17 06:41:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15495
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15495
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15495
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15499
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15495
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15499
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15495
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15499
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15499
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15495
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15495
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15499
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15495
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15499
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15495
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15499
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15499
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15499
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15495
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15499
2016/03/17 06:41:39 [DEBUG] serf: messageJoinType: Node 15495
2016/03/17 06:41:39 [DEBUG] raft: Node 127.0.0.1:15492 updated peer set (2): [127.0.0.1:15496 127.0.0.1:15492]
2016/03/17 06:41:39 [INFO] raft: Added peer 127.0.0.1:15496, starting replication
2016/03/17 06:41:39 [DEBUG] raft-net: 127.0.0.1:15496 accepted connection from: 127.0.0.1:52976
2016/03/17 06:41:39 [DEBUG] raft-net: 127.0.0.1:15496 accepted connection from: 127.0.0.1:52977
2016/03/17 06:41:40 [DEBUG] raft: Failed to contact 127.0.0.1:15496 in 240.264333ms
2016/03/17 06:41:40 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:41:40 [INFO] raft: Node at 127.0.0.1:15492 [Follower] entering Follower state
2016/03/17 06:41:40 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:41:40 [INFO] consul: cluster leadership lost
2016/03/17 06:41:40 [ERR] consul: failed to reconcile member: {Node 15495 127.0.0.1 15497 map[vsn_min:1 vsn_max:3 build: port:15496 role:consul dc:dc1 vsn:2] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:41:40 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:41:40 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/17 06:41:40 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:40 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:40 [WARN] raft: Failed to get previous log: 4 log not found (last: 0)
2016/03/17 06:41:40 [WARN] raft: AppendEntries to 127.0.0.1:15496 rejected, sending older logs (next: 1)
2016/03/17 06:41:40 [DEBUG] raft-net: 127.0.0.1:15496 accepted connection from: 127.0.0.1:52978
2016/03/17 06:41:40 [DEBUG] raft: Node 127.0.0.1:15496 updated peer set (2): [127.0.0.1:15492]
2016/03/17 06:41:40 [WARN] raft: Rejecting vote from 127.0.0.1:15492 since we have a leader: 127.0.0.1:15492
2016/03/17 06:41:40 [INFO] raft: pipelining replication to peer 127.0.0.1:15496
2016/03/17 06:41:40 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15496
2016/03/17 06:41:41 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:41 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:41 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:41 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:41 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/03/17 06:41:41 [DEBUG] raft-net: 127.0.0.1:15492 accepted connection from: 127.0.0.1:34512
2016/03/17 06:41:41 [DEBUG] memberlist: Potential blocking operation. Last command took 10.840334ms
2016/03/17 06:41:42 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:42 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:42 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:42 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:42 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:42 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:42 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:42 [INFO] raft: Node at 127.0.0.1:15496 [Follower] entering Follower state
2016/03/17 06:41:42 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:42 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:43 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:43 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/03/17 06:41:43 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:43 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:43 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:43 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:44 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:44 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/17 06:41:44 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:44 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:44 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:44 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:45 [INFO] raft: Node at 127.0.0.1:15496 [Follower] entering Follower state
2016/03/17 06:41:45 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:45 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:45 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:45 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:46 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/03/17 06:41:46 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:46 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/17 06:41:46 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:46 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:46 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:46 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:46 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/17 06:41:47 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:47 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:47 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:47 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:47 [INFO] raft: Node at 127.0.0.1:15496 [Follower] entering Follower state
2016/03/17 06:41:48 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:48 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:48 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:48 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:48 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:48 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/03/17 06:41:49 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:49 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:49 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:41:49 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:49 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:49 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:49 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:41:49 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:49 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:49 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:49 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:49 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:50 [INFO] consul: shutting down server
2016/03/17 06:41:50 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:50 [DEBUG] memberlist: Failed UDP ping: Node 15499 (timeout reached)
2016/03/17 06:41:50 [INFO] memberlist: Suspect Node 15499 has failed, no acks received
2016/03/17 06:41:50 [INFO] raft: Node at 127.0.0.1:15496 [Follower] entering Follower state
2016/03/17 06:41:50 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:50 [DEBUG] memberlist: Failed UDP ping: Node 15499 (timeout reached)
2016/03/17 06:41:50 [DEBUG] memberlist: Failed UDP ping: Node 15499 (timeout reached)
2016/03/17 06:41:50 [INFO] memberlist: Suspect Node 15499 has failed, no acks received
2016/03/17 06:41:50 [INFO] memberlist: Suspect Node 15499 has failed, no acks received
2016/03/17 06:41:50 [DEBUG] memberlist: Failed UDP ping: Node 15499 (timeout reached)
2016/03/17 06:41:50 [INFO] consul: shutting down server
2016/03/17 06:41:50 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:50 [INFO] memberlist: Suspect Node 15499 has failed, no acks received
2016/03/17 06:41:50 [INFO] memberlist: Marking Node 15499 as failed, suspect timeout reached
2016/03/17 06:41:50 [INFO] serf: EventMemberFailed: Node 15499 127.0.0.1
2016/03/17 06:41:50 [DEBUG] memberlist: Failed UDP ping: Node 15495 (timeout reached)
2016/03/17 06:41:50 [INFO] memberlist: Marking Node 15499 as failed, suspect timeout reached
2016/03/17 06:41:50 [INFO] serf: EventMemberFailed: Node 15499 127.0.0.1
2016/03/17 06:41:50 [INFO] consul: removing LAN server Node 15499 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/03/17 06:41:50 [INFO] memberlist: Suspect Node 15495 has failed, no acks received
2016/03/17 06:41:50 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:50 [DEBUG] raft: Votes needed: 2
2016/03/17 06:41:50 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:50 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:41:50 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15496: EOF
2016/03/17 06:41:50 [DEBUG] memberlist: Failed UDP ping: Node 15495 (timeout reached)
2016/03/17 06:41:50 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:41:50 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/03/17 06:41:50 [INFO] memberlist: Suspect Node 15495 has failed, no acks received
2016/03/17 06:41:50 [INFO] memberlist: Marking Node 15495 as failed, suspect timeout reached
2016/03/17 06:41:50 [INFO] serf: EventMemberFailed: Node 15495 127.0.0.1
2016/03/17 06:41:50 [INFO] consul: removing LAN server Node 15495 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/03/17 06:41:51 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:41:51 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15496: EOF
2016/03/17 06:41:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:51 [INFO] consul: shutting down server
2016/03/17 06:41:51 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:51 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:51 [DEBUG] raft: Votes needed: 2
--- FAIL: TestLeader_TombstoneGC_Reset (14.16s)
	leader_test.go:511: should have 3 peers
=== RUN   TestLeader_ReapTombstones
2016/03/17 06:41:51 [INFO] raft: Node at 127.0.0.1:15504 [Follower] entering Follower state
2016/03/17 06:41:51 [INFO] serf: EventMemberJoin: Node 15503 127.0.0.1
2016/03/17 06:41:51 [INFO] consul: adding LAN server Node 15503 (Addr: 127.0.0.1:15504) (DC: dc1)
2016/03/17 06:41:51 [INFO] serf: EventMemberJoin: Node 15503.dc1 127.0.0.1
2016/03/17 06:41:51 [INFO] consul: adding WAN server Node 15503.dc1 (Addr: 127.0.0.1:15504) (DC: dc1)
2016/03/17 06:41:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:52 [INFO] raft: Node at 127.0.0.1:15504 [Candidate] entering Candidate state
2016/03/17 06:41:52 [DEBUG] raft: Votes needed: 1
2016/03/17 06:41:52 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:52 [INFO] raft: Election won. Tally: 1
2016/03/17 06:41:52 [INFO] raft: Node at 127.0.0.1:15504 [Leader] entering Leader state
2016/03/17 06:41:52 [INFO] consul: cluster leadership acquired
2016/03/17 06:41:52 [INFO] consul: New leader elected: Node 15503
2016/03/17 06:41:52 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:41:52 [DEBUG] raft: Node 127.0.0.1:15504 updated peer set (2): [127.0.0.1:15504]
2016/03/17 06:41:52 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:41:52 [INFO] consul: member 'Node 15503' joined, marking health alive
2016/03/17 06:41:54 [INFO] consul: shutting down server
2016/03/17 06:41:54 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:54 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:54 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestLeader_ReapTombstones (3.26s)
=== RUN   TestPreparedQuery_Apply
2016/03/17 06:41:55 [INFO] raft: Node at 127.0.0.1:15508 [Follower] entering Follower state
2016/03/17 06:41:55 [INFO] serf: EventMemberJoin: Node 15507 127.0.0.1
2016/03/17 06:41:55 [INFO] consul: adding LAN server Node 15507 (Addr: 127.0.0.1:15508) (DC: dc1)
2016/03/17 06:41:55 [INFO] serf: EventMemberJoin: Node 15507.dc1 127.0.0.1
2016/03/17 06:41:55 [INFO] consul: adding WAN server Node 15507.dc1 (Addr: 127.0.0.1:15508) (DC: dc1)
2016/03/17 06:41:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:41:55 [INFO] raft: Node at 127.0.0.1:15508 [Candidate] entering Candidate state
2016/03/17 06:41:55 [DEBUG] raft: Votes needed: 1
2016/03/17 06:41:55 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:41:55 [INFO] raft: Election won. Tally: 1
2016/03/17 06:41:55 [INFO] raft: Node at 127.0.0.1:15508 [Leader] entering Leader state
2016/03/17 06:41:55 [INFO] consul: cluster leadership acquired
2016/03/17 06:41:55 [INFO] consul: New leader elected: Node 15507
2016/03/17 06:41:56 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:41:56 [DEBUG] raft: Node 127.0.0.1:15508 updated peer set (2): [127.0.0.1:15508]
2016/03/17 06:41:56 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:41:56 [INFO] consul: member 'Node 15507' joined, marking health alive
2016/03/17 06:41:58 [INFO] consul: shutting down server
2016/03/17 06:41:58 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:59 [WARN] serf: Shutdown without a Leave
2016/03/17 06:41:59 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:41:59 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestPreparedQuery_Apply (4.56s)
=== RUN   TestPreparedQuery_Apply_ACLDeny
2016/03/17 06:41:59 [INFO] raft: Node at 127.0.0.1:15512 [Follower] entering Follower state
2016/03/17 06:41:59 [INFO] serf: EventMemberJoin: Node 15511 127.0.0.1
2016/03/17 06:41:59 [INFO] consul: adding LAN server Node 15511 (Addr: 127.0.0.1:15512) (DC: dc1)
2016/03/17 06:41:59 [INFO] serf: EventMemberJoin: Node 15511.dc1 127.0.0.1
2016/03/17 06:41:59 [INFO] consul: adding WAN server Node 15511.dc1 (Addr: 127.0.0.1:15512) (DC: dc1)
2016/03/17 06:42:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:42:00 [INFO] raft: Node at 127.0.0.1:15512 [Candidate] entering Candidate state
2016/03/17 06:42:00 [DEBUG] raft: Votes needed: 1
2016/03/17 06:42:00 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:42:00 [INFO] raft: Election won. Tally: 1
2016/03/17 06:42:00 [INFO] raft: Node at 127.0.0.1:15512 [Leader] entering Leader state
2016/03/17 06:42:00 [INFO] consul: cluster leadership acquired
2016/03/17 06:42:00 [INFO] consul: New leader elected: Node 15511
2016/03/17 06:42:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:42:00 [DEBUG] raft: Node 127.0.0.1:15512 updated peer set (2): [127.0.0.1:15512]
2016/03/17 06:42:00 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:42:01 [INFO] consul: member 'Node 15511' joined, marking health alive
2016/03/17 06:42:02 [WARN] consul.prepared_query: Operation on prepared query for service 'redis' denied due to ACLs
2016/03/17 06:42:03 [WARN] consul.prepared_query: Operation on prepared query '6131764f-e6f4-9c3a-25f7-78257ff54167' denied because ACL didn't match ACL used to create the query, and a management token wasn't supplied
2016/03/17 06:42:03 [WARN] consul.prepared_query: Operation on prepared query '6131764f-e6f4-9c3a-25f7-78257ff54167' denied because ACL didn't match ACL used to create the query, and a management token wasn't supplied
2016/03/17 06:42:03 [WARN] consul.prepared_query: Operation on prepared query '6131764f-e6f4-9c3a-25f7-78257ff54167' denied because ACL didn't match ACL used to create the query, and a management token wasn't supplied
2016/03/17 06:42:03 [WARN] consul.prepared_query: Operation on prepared query '6131764f-e6f4-9c3a-25f7-78257ff54167' denied because ACL didn't match ACL used to create the query, and a management token wasn't supplied
2016/03/17 06:42:06 [INFO] consul: shutting down server
2016/03/17 06:42:06 [WARN] serf: Shutdown without a Leave
2016/03/17 06:42:06 [WARN] serf: Shutdown without a Leave
--- PASS: TestPreparedQuery_Apply_ACLDeny (7.12s)
=== RUN   TestPreparedQuery_Apply_ForwardLeader
2016/03/17 06:42:07 [INFO] raft: Node at 127.0.0.1:15516 [Follower] entering Follower state
2016/03/17 06:42:07 [INFO] serf: EventMemberJoin: Node 15515 127.0.0.1
2016/03/17 06:42:07 [INFO] consul: adding LAN server Node 15515 (Addr: 127.0.0.1:15516) (DC: dc1)
2016/03/17 06:42:07 [INFO] serf: EventMemberJoin: Node 15515.dc1 127.0.0.1
2016/03/17 06:42:07 [INFO] consul: adding WAN server Node 15515.dc1 (Addr: 127.0.0.1:15516) (DC: dc1)
2016/03/17 06:42:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:42:07 [INFO] raft: Node at 127.0.0.1:15516 [Candidate] entering Candidate state
2016/03/17 06:42:07 [INFO] raft: Node at 127.0.0.1:15520 [Follower] entering Follower state
2016/03/17 06:42:07 [INFO] serf: EventMemberJoin: Node 15519 127.0.0.1
2016/03/17 06:42:07 [INFO] consul: adding LAN server Node 15519 (Addr: 127.0.0.1:15520) (DC: dc1)
2016/03/17 06:42:07 [INFO] serf: EventMemberJoin: Node 15519.dc1 127.0.0.1
2016/03/17 06:42:07 [DEBUG] memberlist: TCP connection from=127.0.0.1:49373
2016/03/17 06:42:07 [INFO] consul: adding WAN server Node 15519.dc1 (Addr: 127.0.0.1:15520) (DC: dc1)
2016/03/17 06:42:07 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15517
2016/03/17 06:42:07 [INFO] serf: EventMemberJoin: Node 15519 127.0.0.1
2016/03/17 06:42:07 [INFO] serf: EventMemberJoin: Node 15515 127.0.0.1
2016/03/17 06:42:07 [INFO] consul: adding LAN server Node 15519 (Addr: 127.0.0.1:15520) (DC: dc1)
2016/03/17 06:42:07 [INFO] consul: adding LAN server Node 15515 (Addr: 127.0.0.1:15516) (DC: dc1)
2016/03/17 06:42:07 [DEBUG] raft: Votes needed: 1
2016/03/17 06:42:07 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:42:07 [INFO] raft: Election won. Tally: 1
2016/03/17 06:42:07 [INFO] raft: Node at 127.0.0.1:15516 [Leader] entering Leader state
2016/03/17 06:42:07 [INFO] consul: cluster leadership acquired
2016/03/17 06:42:07 [INFO] consul: New leader elected: Node 15515
2016/03/17 06:42:08 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:42:08 [INFO] raft: Node at 127.0.0.1:15520 [Candidate] entering Candidate state
2016/03/17 06:42:08 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:08 [INFO] consul: New leader elected: Node 15515
2016/03/17 06:42:08 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:08 [DEBUG] serf: messageJoinType: Node 15519
2016/03/17 06:42:08 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:08 [DEBUG] serf: messageJoinType: Node 15519
2016/03/17 06:42:08 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:08 [DEBUG] serf: messageJoinType: Node 15519
2016/03/17 06:42:08 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:08 [DEBUG] serf: messageJoinType: Node 15519
2016/03/17 06:42:08 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:08 [DEBUG] serf: messageJoinType: Node 15519
2016/03/17 06:42:08 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:08 [DEBUG] serf: messageJoinType: Node 15519
2016/03/17 06:42:08 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:08 [DEBUG] serf: messageJoinType: Node 15519
2016/03/17 06:42:08 [DEBUG] serf: messageJoinType: Node 15519
2016/03/17 06:42:08 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:42:08 [DEBUG] raft: Node 127.0.0.1:15516 updated peer set (2): [127.0.0.1:15516]
2016/03/17 06:42:08 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:42:08 [INFO] consul: member 'Node 15515' joined, marking health alive
2016/03/17 06:42:09 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:09 [INFO] consul: member 'Node 15519' joined, marking health alive
2016/03/17 06:42:09 [DEBUG] raft: Votes needed: 1
2016/03/17 06:42:09 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:42:09 [INFO] raft: Election won. Tally: 1
2016/03/17 06:42:09 [INFO] raft: Node at 127.0.0.1:15520 [Leader] entering Leader state
2016/03/17 06:42:09 [INFO] consul: cluster leadership acquired
2016/03/17 06:42:09 [INFO] consul: New leader elected: Node 15519
2016/03/17 06:42:09 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:09 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:09 [INFO] consul: New leader elected: Node 15519
2016/03/17 06:42:09 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:09 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:09 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:09 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:09 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:09 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:09 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:42:09 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:09 [DEBUG] raft: Node 127.0.0.1:15520 updated peer set (2): [127.0.0.1:15520]
2016/03/17 06:42:09 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:42:09 [INFO] consul: member 'Node 15519' joined, marking health alive
2016/03/17 06:42:09 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:10 [ERR] consul: 'Node 15515' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:10 [INFO] consul: member 'Node 15515' joined, marking health alive
2016/03/17 06:42:10 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:10 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:11 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:11 [ERR] consul: 'Node 15515' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:11 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:11 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:12 [ERR] consul: 'Node 15515' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:12 [INFO] consul: shutting down server
2016/03/17 06:42:12 [WARN] serf: Shutdown without a Leave
2016/03/17 06:42:12 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:12 [WARN] serf: Shutdown without a Leave
2016/03/17 06:42:12 [DEBUG] memberlist: Failed UDP ping: Node 15519 (timeout reached)
2016/03/17 06:42:12 [INFO] memberlist: Suspect Node 15519 has failed, no acks received
2016/03/17 06:42:12 [ERR] consul: 'Node 15519' and 'Node 15515' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:12 [ERR] consul: 'Node 15515' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:12 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/17 06:42:12 [INFO] consul: shutting down server
2016/03/17 06:42:12 [WARN] serf: Shutdown without a Leave
2016/03/17 06:42:12 [INFO] memberlist: Marking Node 15519 as failed, suspect timeout reached
2016/03/17 06:42:12 [INFO] serf: EventMemberFailed: Node 15519 127.0.0.1
2016/03/17 06:42:12 [WARN] serf: Shutdown without a Leave
2016/03/17 06:42:12 [INFO] consul: member 'Node 15519' failed, marking health critical
2016/03/17 06:42:13 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/03/17 06:42:13 [ERR] consul: failed to reconcile member: {Node 15519 127.0.0.1 15521 map[vsn_min:1 vsn_max:3 build: port:15520 bootstrap:1 role:consul dc:dc1 vsn:2] failed 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:42:13 [ERR] consul: failed to reconcile: leadership lost while committing log
--- PASS: TestPreparedQuery_Apply_ForwardLeader (6.65s)
=== RUN   TestPreparedQuery_parseQuery
--- PASS: TestPreparedQuery_parseQuery (0.00s)
=== RUN   TestPreparedQuery_Get
2016/03/17 06:42:13 [INFO] raft: Node at 127.0.0.1:15524 [Follower] entering Follower state
2016/03/17 06:42:13 [INFO] serf: EventMemberJoin: Node 15523 127.0.0.1
2016/03/17 06:42:13 [INFO] consul: adding LAN server Node 15523 (Addr: 127.0.0.1:15524) (DC: dc1)
2016/03/17 06:42:13 [INFO] serf: EventMemberJoin: Node 15523.dc1 127.0.0.1
2016/03/17 06:42:13 [INFO] consul: adding WAN server Node 15523.dc1 (Addr: 127.0.0.1:15524) (DC: dc1)
2016/03/17 06:42:13 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:42:13 [INFO] raft: Node at 127.0.0.1:15524 [Candidate] entering Candidate state
2016/03/17 06:42:14 [DEBUG] raft: Votes needed: 1
2016/03/17 06:42:14 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:42:14 [INFO] raft: Election won. Tally: 1
2016/03/17 06:42:14 [INFO] raft: Node at 127.0.0.1:15524 [Leader] entering Leader state
2016/03/17 06:42:14 [INFO] consul: cluster leadership acquired
2016/03/17 06:42:14 [INFO] consul: New leader elected: Node 15523
2016/03/17 06:42:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:42:14 [DEBUG] raft: Node 127.0.0.1:15524 updated peer set (2): [127.0.0.1:15524]
2016/03/17 06:42:14 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:42:14 [INFO] consul: member 'Node 15523' joined, marking health alive
2016/03/17 06:42:16 [WARN] consul.prepared_query: Request to get prepared query 'f1776da9-3f24-0d2f-592b-7cd259cb8632' denied because ACL didn't match ACL used to create the query, and a management token wasn't supplied
2016/03/17 06:42:16 [WARN] consul.prepared_query: Request to get prepared query 'f1776da9-3f24-0d2f-592b-7cd259cb8632' denied because ACL didn't match ACL used to create the query, and a management token wasn't supplied
2016/03/17 06:42:16 [INFO] consul: shutting down server
2016/03/17 06:42:16 [WARN] serf: Shutdown without a Leave
2016/03/17 06:42:16 [WARN] serf: Shutdown without a Leave
--- PASS: TestPreparedQuery_Get (3.66s)
=== RUN   TestPreparedQuery_List
2016/03/17 06:42:17 [INFO] raft: Node at 127.0.0.1:15528 [Follower] entering Follower state
2016/03/17 06:42:17 [INFO] serf: EventMemberJoin: Node 15527 127.0.0.1
2016/03/17 06:42:17 [INFO] consul: adding LAN server Node 15527 (Addr: 127.0.0.1:15528) (DC: dc1)
2016/03/17 06:42:17 [INFO] serf: EventMemberJoin: Node 15527.dc1 127.0.0.1
2016/03/17 06:42:17 [INFO] consul: adding WAN server Node 15527.dc1 (Addr: 127.0.0.1:15528) (DC: dc1)
2016/03/17 06:42:17 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:42:17 [INFO] raft: Node at 127.0.0.1:15528 [Candidate] entering Candidate state
2016/03/17 06:42:17 [DEBUG] raft: Votes needed: 1
2016/03/17 06:42:17 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:42:17 [INFO] raft: Election won. Tally: 1
2016/03/17 06:42:17 [INFO] raft: Node at 127.0.0.1:15528 [Leader] entering Leader state
2016/03/17 06:42:17 [INFO] consul: cluster leadership acquired
2016/03/17 06:42:17 [INFO] consul: New leader elected: Node 15527
2016/03/17 06:42:17 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:42:17 [DEBUG] raft: Node 127.0.0.1:15528 updated peer set (2): [127.0.0.1:15528]
2016/03/17 06:42:18 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:42:18 [INFO] consul: member 'Node 15527' joined, marking health alive
2016/03/17 06:42:19 [WARN] consul.prepared_query: Request to list prepared queries denied due to ACLs
2016/03/17 06:42:19 [WARN] consul.prepared_query: Request to list prepared queries denied due to ACLs
2016/03/17 06:42:19 [INFO] consul: shutting down server
2016/03/17 06:42:19 [WARN] serf: Shutdown without a Leave
2016/03/17 06:42:19 [WARN] serf: Shutdown without a Leave
2016/03/17 06:42:19 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestPreparedQuery_List (3.18s)
=== RUN   TestPreparedQuery_Execute
2016/03/17 06:42:20 [INFO] raft: Node at 127.0.0.1:15532 [Follower] entering Follower state
2016/03/17 06:42:20 [INFO] serf: EventMemberJoin: Node 15531 127.0.0.1
2016/03/17 06:42:20 [INFO] consul: adding LAN server Node 15531 (Addr: 127.0.0.1:15532) (DC: dc1)
2016/03/17 06:42:20 [INFO] serf: EventMemberJoin: Node 15531.dc1 127.0.0.1
2016/03/17 06:42:20 [INFO] consul: adding WAN server Node 15531.dc1 (Addr: 127.0.0.1:15532) (DC: dc1)
2016/03/17 06:42:20 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:42:20 [INFO] raft: Node at 127.0.0.1:15532 [Candidate] entering Candidate state
2016/03/17 06:42:21 [INFO] raft: Node at 127.0.0.1:15536 [Follower] entering Follower state
2016/03/17 06:42:21 [INFO] serf: EventMemberJoin: Node 15535 127.0.0.1
2016/03/17 06:42:21 [INFO] consul: adding LAN server Node 15535 (Addr: 127.0.0.1:15536) (DC: dc2)
2016/03/17 06:42:21 [INFO] serf: EventMemberJoin: Node 15535.dc2 127.0.0.1
2016/03/17 06:42:21 [INFO] consul: adding WAN server Node 15535.dc2 (Addr: 127.0.0.1:15536) (DC: dc2)
2016/03/17 06:42:21 [DEBUG] raft: Votes needed: 1
2016/03/17 06:42:21 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:42:21 [INFO] raft: Election won. Tally: 1
2016/03/17 06:42:21 [INFO] raft: Node at 127.0.0.1:15532 [Leader] entering Leader state
2016/03/17 06:42:21 [INFO] consul: cluster leadership acquired
2016/03/17 06:42:21 [INFO] consul: New leader elected: Node 15531
2016/03/17 06:42:21 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:42:21 [INFO] raft: Node at 127.0.0.1:15536 [Candidate] entering Candidate state
2016/03/17 06:42:21 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:42:21 [DEBUG] raft: Node 127.0.0.1:15532 updated peer set (2): [127.0.0.1:15532]
2016/03/17 06:42:21 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:42:22 [DEBUG] raft: Votes needed: 1
2016/03/17 06:42:22 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:42:22 [INFO] raft: Election won. Tally: 1
2016/03/17 06:42:22 [INFO] raft: Node at 127.0.0.1:15536 [Leader] entering Leader state
2016/03/17 06:42:22 [INFO] consul: cluster leadership acquired
2016/03/17 06:42:22 [INFO] consul: New leader elected: Node 15535
2016/03/17 06:42:22 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:42:22 [INFO] consul: member 'Node 15531' joined, marking health alive
2016/03/17 06:42:22 [DEBUG] raft: Node 127.0.0.1:15536 updated peer set (2): [127.0.0.1:15536]
2016/03/17 06:42:22 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:42:22 [INFO] consul: member 'Node 15535' joined, marking health alive
2016/03/17 06:42:23 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15534
2016/03/17 06:42:23 [DEBUG] memberlist: TCP connection from=127.0.0.1:50676
2016/03/17 06:42:23 [INFO] serf: EventMemberJoin: Node 15535.dc2 127.0.0.1
2016/03/17 06:42:23 [INFO] consul: adding WAN server Node 15535.dc2 (Addr: 127.0.0.1:15536) (DC: dc2)
2016/03/17 06:42:23 [INFO] serf: EventMemberJoin: Node 15531.dc1 127.0.0.1
2016/03/17 06:42:23 [INFO] consul: adding WAN server Node 15531.dc1 (Addr: 127.0.0.1:15532) (DC: dc1)
2016/03/17 06:42:23 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/17 06:42:23 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/17 06:42:23 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/17 06:42:23 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/17 06:42:23 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/17 06:42:23 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/17 06:42:23 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/17 06:42:23 [DEBUG] serf: messageJoinType: Node 15535.dc2
2016/03/17 06:42:37 [INFO] consul: shutting down server
2016/03/17 06:42:37 [WARN] serf: Shutdown without a Leave
2016/03/17 06:42:38 [WARN] serf: Shutdown without a Leave
2016/03/17 06:42:38 [DEBUG] memberlist: Failed UDP ping: Node 15535.dc2 (timeout reached)
2016/03/17 06:42:38 [INFO] memberlist: Suspect Node 15535.dc2 has failed, no acks received
2016/03/17 06:42:38 [DEBUG] memberlist: Failed UDP ping: Node 15535.dc2 (timeout reached)
2016/03/17 06:42:38 [INFO] memberlist: Suspect Node 15535.dc2 has failed, no acks received
2016/03/17 06:42:38 [INFO] memberlist: Marking Node 15535.dc2 as failed, suspect timeout reached
2016/03/17 06:42:38 [INFO] serf: EventMemberFailed: Node 15535.dc2 127.0.0.1
2016/03/17 06:42:38 [INFO] consul: removing WAN server Node 15535.dc2 (Addr: 127.0.0.1:15536) (DC: dc2)
2016/03/17 06:42:49 [INFO] consul: shutting down server
2016/03/17 06:42:49 [WARN] serf: Shutdown without a Leave
2016/03/17 06:42:50 [WARN] serf: Shutdown without a Leave
--- FAIL: TestPreparedQuery_Execute (31.32s)
	prepared_query_endpoint_test.go:1181: bad: {foo [{0x10f03d60 0x10ecd040 []} {0x10f03dc0 0x10ecd080 []} {0x10f03e20 0x10ecd0c0 []} {0x10f03ea0 0x10ecd100 []} {0x10f03f20 0x10ecd140 []} {0x10f03f80 0x10ecd1c0 []} {0x10f03fe0 0x10ecd200 []} {0x10cc2400 0x10ecd240 []} {0x10cc2460 0x10ecd2c0 []} {0x10cc24c0 0x10ecd480 []}] {10s} dc1 0 {0 0 true}}
=== RUN   TestPreparedQuery_Execute_ForwardLeader
2016/03/17 06:42:52 [INFO] raft: Node at 127.0.0.1:15540 [Follower] entering Follower state
2016/03/17 06:42:52 [INFO] serf: EventMemberJoin: Node 15539 127.0.0.1
2016/03/17 06:42:52 [INFO] consul: adding LAN server Node 15539 (Addr: 127.0.0.1:15540) (DC: dc1)
2016/03/17 06:42:52 [INFO] serf: EventMemberJoin: Node 15539.dc1 127.0.0.1
2016/03/17 06:42:52 [INFO] consul: adding WAN server Node 15539.dc1 (Addr: 127.0.0.1:15540) (DC: dc1)
2016/03/17 06:42:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:42:52 [INFO] raft: Node at 127.0.0.1:15540 [Candidate] entering Candidate state
2016/03/17 06:42:53 [DEBUG] raft: Votes needed: 1
2016/03/17 06:42:53 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:42:53 [INFO] raft: Election won. Tally: 1
2016/03/17 06:42:53 [INFO] raft: Node at 127.0.0.1:15540 [Leader] entering Leader state
2016/03/17 06:42:53 [INFO] consul: cluster leadership acquired
2016/03/17 06:42:53 [INFO] consul: New leader elected: Node 15539
2016/03/17 06:42:53 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:42:53 [DEBUG] raft: Node 127.0.0.1:15540 updated peer set (2): [127.0.0.1:15540]
2016/03/17 06:42:53 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:42:53 [INFO] consul: member 'Node 15539' joined, marking health alive
2016/03/17 06:42:56 [INFO] raft: Node at 127.0.0.1:15544 [Follower] entering Follower state
2016/03/17 06:42:56 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:42:56 [INFO] raft: Node at 127.0.0.1:15544 [Candidate] entering Candidate state
2016/03/17 06:42:57 [INFO] serf: EventMemberJoin: Node 15543 127.0.0.1
2016/03/17 06:42:57 [INFO] consul: adding LAN server Node 15543 (Addr: 127.0.0.1:15544) (DC: dc1)
2016/03/17 06:42:57 [DEBUG] raft: Votes needed: 1
2016/03/17 06:42:57 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:42:57 [INFO] raft: Election won. Tally: 1
2016/03/17 06:42:57 [INFO] raft: Node at 127.0.0.1:15544 [Leader] entering Leader state
2016/03/17 06:42:57 [INFO] consul: cluster leadership acquired
2016/03/17 06:42:57 [INFO] consul: New leader elected: Node 15543
2016/03/17 06:42:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:42:58 [DEBUG] raft: Node 127.0.0.1:15544 updated peer set (2): [127.0.0.1:15544]
2016/03/17 06:42:58 [INFO] serf: EventMemberJoin: Node 15543.dc1 127.0.0.1
2016/03/17 06:42:58 [INFO] consul: adding WAN server Node 15543.dc1 (Addr: 127.0.0.1:15544) (DC: dc1)
2016/03/17 06:42:58 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15541
2016/03/17 06:42:58 [DEBUG] memberlist: TCP connection from=127.0.0.1:50618
2016/03/17 06:42:58 [INFO] serf: EventMemberJoin: Node 15543 127.0.0.1
2016/03/17 06:42:58 [INFO] consul: adding LAN server Node 15543 (Addr: 127.0.0.1:15544) (DC: dc1)
2016/03/17 06:42:58 [INFO] serf: EventMemberJoin: Node 15539 127.0.0.1
2016/03/17 06:42:58 [INFO] consul: adding LAN server Node 15539 (Addr: 127.0.0.1:15540) (DC: dc1)
2016/03/17 06:42:58 [INFO] consul: New leader elected: Node 15543
2016/03/17 06:42:58 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:58 [DEBUG] serf: messageJoinType: Node 15543
2016/03/17 06:42:58 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:58 [DEBUG] serf: messageJoinType: Node 15543
2016/03/17 06:42:58 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:58 [DEBUG] serf: messageJoinType: Node 15543
2016/03/17 06:42:58 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:58 [DEBUG] serf: messageJoinType: Node 15543
2016/03/17 06:42:58 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:58 [DEBUG] serf: messageJoinType: Node 15543
2016/03/17 06:42:58 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:58 [DEBUG] serf: messageJoinType: Node 15543
2016/03/17 06:42:58 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:58 [DEBUG] serf: messageJoinType: Node 15543
2016/03/17 06:42:58 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:42:58 [DEBUG] serf: messageJoinType: Node 15543
2016/03/17 06:42:58 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:58 [INFO] consul: member 'Node 15543' joined, marking health alive
2016/03/17 06:42:59 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:42:59 [INFO] consul: member 'Node 15543' joined, marking health alive
2016/03/17 06:42:59 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:59 [ERR] consul: 'Node 15539' and 'Node 15543' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:59 [INFO] consul: member 'Node 15539' joined, marking health alive
2016/03/17 06:42:59 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:42:59 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:00 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:00 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:00 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:01 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:01 [ERR] consul: 'Node 15539' and 'Node 15543' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:01 [ERR] consul: 'Node 15539' and 'Node 15543' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:01 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:01 [ERR] consul: 'Node 15539' and 'Node 15543' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:01 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:01 [ERR] consul: 'Node 15539' and 'Node 15543' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:01 [INFO] consul: shutting down server
2016/03/17 06:43:01 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:01 [DEBUG] memberlist: Failed UDP ping: Node 15543 (timeout reached)
2016/03/17 06:43:02 [INFO] memberlist: Suspect Node 15543 has failed, no acks received
2016/03/17 06:43:02 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:02 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:02 [ERR] consul: 'Node 15539' and 'Node 15543' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:02 [INFO] consul: shutting down server
2016/03/17 06:43:02 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:02 [ERR] consul: 'Node 15543' and 'Node 15539' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:02 [INFO] memberlist: Marking Node 15543 as failed, suspect timeout reached
2016/03/17 06:43:02 [INFO] serf: EventMemberFailed: Node 15543 127.0.0.1
2016/03/17 06:43:02 [WARN] serf: Shutdown without a Leave
--- PASS: TestPreparedQuery_Execute_ForwardLeader (11.00s)
=== RUN   TestPreparedQuery_tagFilter
--- PASS: TestPreparedQuery_tagFilter (0.00s)
=== RUN   TestPreparedQuery_Wrapper
2016/03/17 06:43:02 [INFO] raft: Node at 127.0.0.1:15548 [Follower] entering Follower state
2016/03/17 06:43:02 [INFO] serf: EventMemberJoin: Node 15547 127.0.0.1
2016/03/17 06:43:02 [INFO] consul: adding LAN server Node 15547 (Addr: 127.0.0.1:15548) (DC: dc1)
2016/03/17 06:43:02 [INFO] serf: EventMemberJoin: Node 15547.dc1 127.0.0.1
2016/03/17 06:43:02 [INFO] consul: adding WAN server Node 15547.dc1 (Addr: 127.0.0.1:15548) (DC: dc1)
2016/03/17 06:43:02 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:02 [INFO] raft: Node at 127.0.0.1:15548 [Candidate] entering Candidate state
2016/03/17 06:43:03 [INFO] raft: Node at 127.0.0.1:15552 [Follower] entering Follower state
2016/03/17 06:43:03 [INFO] serf: EventMemberJoin: Node 15551 127.0.0.1
2016/03/17 06:43:03 [INFO] consul: adding LAN server Node 15551 (Addr: 127.0.0.1:15552) (DC: dc2)
2016/03/17 06:43:03 [INFO] serf: EventMemberJoin: Node 15551.dc2 127.0.0.1
2016/03/17 06:43:03 [INFO] consul: adding WAN server Node 15551.dc2 (Addr: 127.0.0.1:15552) (DC: dc2)
2016/03/17 06:43:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:03 [INFO] raft: Node at 127.0.0.1:15552 [Candidate] entering Candidate state
2016/03/17 06:43:03 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:03 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:03 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:03 [INFO] raft: Node at 127.0.0.1:15548 [Leader] entering Leader state
2016/03/17 06:43:03 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:03 [INFO] consul: New leader elected: Node 15547
2016/03/17 06:43:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:04 [DEBUG] raft: Node 127.0.0.1:15548 updated peer set (2): [127.0.0.1:15548]
2016/03/17 06:43:04 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:04 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:04 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:04 [INFO] raft: Node at 127.0.0.1:15552 [Leader] entering Leader state
2016/03/17 06:43:04 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:04 [INFO] consul: New leader elected: Node 15551
2016/03/17 06:43:04 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:05 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:05 [DEBUG] raft: Node 127.0.0.1:15552 updated peer set (2): [127.0.0.1:15552]
2016/03/17 06:43:05 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:05 [INFO] consul: member 'Node 15551' joined, marking health alive
2016/03/17 06:43:05 [INFO] consul: member 'Node 15547' joined, marking health alive
2016/03/17 06:43:05 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15550
2016/03/17 06:43:05 [DEBUG] memberlist: TCP connection from=127.0.0.1:42542
2016/03/17 06:43:05 [INFO] serf: EventMemberJoin: Node 15551.dc2 127.0.0.1
2016/03/17 06:43:05 [INFO] serf: EventMemberJoin: Node 15547.dc1 127.0.0.1
2016/03/17 06:43:05 [INFO] consul: adding WAN server Node 15551.dc2 (Addr: 127.0.0.1:15552) (DC: dc2)
2016/03/17 06:43:05 [INFO] consul: adding WAN server Node 15547.dc1 (Addr: 127.0.0.1:15548) (DC: dc1)
2016/03/17 06:43:05 [DEBUG] Test
2016/03/17 06:43:05 [INFO] consul: shutting down server
2016/03/17 06:43:05 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:05 [DEBUG] serf: messageJoinType: Node 15551.dc2
2016/03/17 06:43:05 [DEBUG] serf: messageJoinType: Node 15551.dc2
2016/03/17 06:43:05 [DEBUG] serf: messageJoinType: Node 15551.dc2
2016/03/17 06:43:05 [DEBUG] serf: messageJoinType: Node 15551.dc2
2016/03/17 06:43:05 [DEBUG] serf: messageJoinType: Node 15551.dc2
2016/03/17 06:43:05 [DEBUG] serf: messageJoinType: Node 15551.dc2
2016/03/17 06:43:05 [DEBUG] serf: messageJoinType: Node 15551.dc2
2016/03/17 06:43:05 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:06 [DEBUG] memberlist: Failed UDP ping: Node 15551.dc2 (timeout reached)
2016/03/17 06:43:06 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:43:06 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/17 06:43:06 [INFO] consul: shutting down server
2016/03/17 06:43:06 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:06 [INFO] memberlist: Suspect Node 15551.dc2 has failed, no acks received
2016/03/17 06:43:06 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:06 [DEBUG] memberlist: Failed UDP ping: Node 15551.dc2 (timeout reached)
--- PASS: TestPreparedQuery_Wrapper (4.10s)
=== RUN   TestPreparedQuery_queryFailover
--- PASS: TestPreparedQuery_queryFailover (0.00s)
=== RUN   TestRTT_sortNodesByDistanceFrom
2016/03/17 06:43:06 [INFO] memberlist: Suspect Node 15551.dc2 has failed, no acks received
2016/03/17 06:43:06 [INFO] memberlist: Marking Node 15551.dc2 as failed, suspect timeout reached
2016/03/17 06:43:06 [INFO] serf: EventMemberFailed: Node 15551.dc2 127.0.0.1
2016/03/17 06:43:07 [INFO] raft: Node at 127.0.0.1:15556 [Follower] entering Follower state
2016/03/17 06:43:07 [INFO] serf: EventMemberJoin: Node 15555 127.0.0.1
2016/03/17 06:43:07 [INFO] consul: adding LAN server Node 15555 (Addr: 127.0.0.1:15556) (DC: dc1)
2016/03/17 06:43:07 [INFO] serf: EventMemberJoin: Node 15555.dc1 127.0.0.1
2016/03/17 06:43:07 [INFO] consul: adding WAN server Node 15555.dc1 (Addr: 127.0.0.1:15556) (DC: dc1)
2016/03/17 06:43:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:07 [INFO] raft: Node at 127.0.0.1:15556 [Candidate] entering Candidate state
2016/03/17 06:43:07 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:07 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:07 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:07 [INFO] raft: Node at 127.0.0.1:15556 [Leader] entering Leader state
2016/03/17 06:43:07 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:07 [INFO] consul: New leader elected: Node 15555
2016/03/17 06:43:07 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:07 [DEBUG] raft: Node 127.0.0.1:15556 updated peer set (2): [127.0.0.1:15556]
2016/03/17 06:43:07 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:07 [INFO] consul: member 'Node 15555' joined, marking health alive
2016/03/17 06:43:10 [INFO] consul: shutting down server
2016/03/17 06:43:10 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:10 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:11 [WARN] consul.coordinate: Batch update failed: leadership lost while committing log
--- FAIL: TestRTT_sortNodesByDistanceFrom (4.78s)
	rtt_test.go:37: bad sort: apple,node1,node2,node3,node4,node5 != node1,node4,node5,node2,node3,apple
=== RUN   TestRTT_sortNodesByDistanceFrom_Nodes
2016/03/17 06:43:11 [INFO] raft: Node at 127.0.0.1:15560 [Follower] entering Follower state
2016/03/17 06:43:11 [INFO] serf: EventMemberJoin: Node 15559 127.0.0.1
2016/03/17 06:43:11 [INFO] consul: adding LAN server Node 15559 (Addr: 127.0.0.1:15560) (DC: dc1)
2016/03/17 06:43:11 [INFO] serf: EventMemberJoin: Node 15559.dc1 127.0.0.1
2016/03/17 06:43:11 [INFO] consul: adding WAN server Node 15559.dc1 (Addr: 127.0.0.1:15560) (DC: dc1)
2016/03/17 06:43:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:11 [INFO] raft: Node at 127.0.0.1:15560 [Candidate] entering Candidate state
2016/03/17 06:43:12 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:12 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:12 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:12 [INFO] raft: Node at 127.0.0.1:15560 [Leader] entering Leader state
2016/03/17 06:43:12 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:12 [INFO] consul: New leader elected: Node 15559
2016/03/17 06:43:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:12 [DEBUG] raft: Node 127.0.0.1:15560 updated peer set (2): [127.0.0.1:15560]
2016/03/17 06:43:12 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:12 [INFO] consul: member 'Node 15559' joined, marking health alive
2016/03/17 06:43:15 [INFO] consul: shutting down server
2016/03/17 06:43:15 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:15 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:16 [WARN] consul.coordinate: Batch update failed: leadership lost while committing log
--- FAIL: TestRTT_sortNodesByDistanceFrom_Nodes (4.88s)
	rtt_test.go:37: bad sort: apple,node1,node2,node3,node4,node5 != node1,node4,node5,node2,node3,apple
=== RUN   TestRTT_sortNodesByDistanceFrom_ServiceNodes
2016/03/17 06:43:16 [INFO] raft: Node at 127.0.0.1:15564 [Follower] entering Follower state
2016/03/17 06:43:16 [INFO] serf: EventMemberJoin: Node 15563 127.0.0.1
2016/03/17 06:43:16 [INFO] consul: adding LAN server Node 15563 (Addr: 127.0.0.1:15564) (DC: dc1)
2016/03/17 06:43:16 [INFO] serf: EventMemberJoin: Node 15563.dc1 127.0.0.1
2016/03/17 06:43:16 [INFO] consul: adding WAN server Node 15563.dc1 (Addr: 127.0.0.1:15564) (DC: dc1)
2016/03/17 06:43:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:16 [INFO] raft: Node at 127.0.0.1:15564 [Candidate] entering Candidate state
2016/03/17 06:43:17 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:17 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:17 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:17 [INFO] raft: Node at 127.0.0.1:15564 [Leader] entering Leader state
2016/03/17 06:43:17 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:17 [INFO] consul: New leader elected: Node 15563
2016/03/17 06:43:17 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:17 [DEBUG] raft: Node 127.0.0.1:15564 updated peer set (2): [127.0.0.1:15564]
2016/03/17 06:43:17 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:17 [INFO] consul: member 'Node 15563' joined, marking health alive
2016/03/17 06:43:29 [INFO] consul: shutting down server
2016/03/17 06:43:29 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:30 [WARN] serf: Shutdown without a Leave
--- FAIL: TestRTT_sortNodesByDistanceFrom_ServiceNodes (15.21s)
	rtt_test.go:50: bad sort: apple,node1,node2,node3,node4,node5 != node1,node4,node5,node2,node3,apple
=== RUN   TestRTT_sortNodesByDistanceFrom_HealthChecks
2016/03/17 06:43:31 [INFO] raft: Node at 127.0.0.1:15568 [Follower] entering Follower state
2016/03/17 06:43:32 [INFO] serf: EventMemberJoin: Node 15567 127.0.0.1
2016/03/17 06:43:32 [INFO] consul: adding LAN server Node 15567 (Addr: 127.0.0.1:15568) (DC: dc1)
2016/03/17 06:43:32 [INFO] serf: EventMemberJoin: Node 15567.dc1 127.0.0.1
2016/03/17 06:43:32 [INFO] consul: adding WAN server Node 15567.dc1 (Addr: 127.0.0.1:15568) (DC: dc1)
2016/03/17 06:43:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:32 [INFO] raft: Node at 127.0.0.1:15568 [Candidate] entering Candidate state
2016/03/17 06:43:32 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:32 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:32 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:32 [INFO] raft: Node at 127.0.0.1:15568 [Leader] entering Leader state
2016/03/17 06:43:32 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:32 [INFO] consul: New leader elected: Node 15567
2016/03/17 06:43:32 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:33 [DEBUG] raft: Node 127.0.0.1:15568 updated peer set (2): [127.0.0.1:15568]
2016/03/17 06:43:33 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:33 [INFO] consul: member 'Node 15567' joined, marking health alive
2016/03/17 06:43:35 [INFO] consul: shutting down server
2016/03/17 06:43:35 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:35 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:35 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:43:35 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- FAIL: TestRTT_sortNodesByDistanceFrom_HealthChecks (4.35s)
	rtt_test.go:63: bad sort: apple,node1,node2,node3,node4,node5 != node1,node4,node5,node2,node3,apple
=== RUN   TestRTT_sortNodesByDistanceFrom_CheckServiceNodes
2016/03/17 06:43:36 [INFO] raft: Node at 127.0.0.1:15572 [Follower] entering Follower state
2016/03/17 06:43:36 [INFO] serf: EventMemberJoin: Node 15571 127.0.0.1
2016/03/17 06:43:36 [INFO] consul: adding LAN server Node 15571 (Addr: 127.0.0.1:15572) (DC: dc1)
2016/03/17 06:43:36 [INFO] serf: EventMemberJoin: Node 15571.dc1 127.0.0.1
2016/03/17 06:43:36 [INFO] consul: adding WAN server Node 15571.dc1 (Addr: 127.0.0.1:15572) (DC: dc1)
2016/03/17 06:43:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:36 [INFO] raft: Node at 127.0.0.1:15572 [Candidate] entering Candidate state
2016/03/17 06:43:36 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:36 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:36 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:36 [INFO] raft: Node at 127.0.0.1:15572 [Leader] entering Leader state
2016/03/17 06:43:36 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:36 [INFO] consul: New leader elected: Node 15571
2016/03/17 06:43:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:36 [DEBUG] raft: Node 127.0.0.1:15572 updated peer set (2): [127.0.0.1:15572]
2016/03/17 06:43:37 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:37 [INFO] consul: member 'Node 15571' joined, marking health alive
2016/03/17 06:43:39 [INFO] consul: shutting down server
2016/03/17 06:43:39 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:39 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:39 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:43:39 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/17 06:43:39 [WARN] consul.coordinate: Batch update failed: leadership lost while committing log
--- FAIL: TestRTT_sortNodesByDistanceFrom_CheckServiceNodes (4.16s)
	rtt_test.go:76: bad sort: apple,node1,node2,node3,node4,node5 != node1,node4,node5,node2,node3,apple
=== RUN   TestRTT_getDatacenterDistance
--- PASS: TestRTT_getDatacenterDistance (0.00s)
=== RUN   TestRTT_sortDatacentersByDistance
--- PASS: TestRTT_sortDatacentersByDistance (0.00s)
=== RUN   TestRTT_getDatacenterMaps
--- PASS: TestRTT_getDatacenterMaps (0.00s)
=== RUN   TestRTT_getDatacentersByDistance
2016/03/17 06:43:40 [INFO] raft: Node at 127.0.0.1:15576 [Follower] entering Follower state
2016/03/17 06:43:40 [INFO] serf: EventMemberJoin: Node 15575 127.0.0.1
2016/03/17 06:43:40 [INFO] consul: adding LAN server Node 15575 (Addr: 127.0.0.1:15576) (DC: xxx)
2016/03/17 06:43:40 [INFO] serf: EventMemberJoin: Node 15575.xxx 127.0.0.1
2016/03/17 06:43:40 [INFO] consul: adding WAN server Node 15575.xxx (Addr: 127.0.0.1:15576) (DC: xxx)
2016/03/17 06:43:40 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:40 [INFO] raft: Node at 127.0.0.1:15576 [Candidate] entering Candidate state
2016/03/17 06:43:41 [INFO] raft: Node at 127.0.0.1:15580 [Follower] entering Follower state
2016/03/17 06:43:41 [INFO] serf: EventMemberJoin: Node 15579 127.0.0.1
2016/03/17 06:43:41 [INFO] consul: adding LAN server Node 15579 (Addr: 127.0.0.1:15580) (DC: dc1)
2016/03/17 06:43:41 [INFO] serf: EventMemberJoin: Node 15579.dc1 127.0.0.1
2016/03/17 06:43:41 [INFO] consul: adding WAN server Node 15579.dc1 (Addr: 127.0.0.1:15580) (DC: dc1)
2016/03/17 06:43:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:41 [INFO] raft: Node at 127.0.0.1:15580 [Candidate] entering Candidate state
2016/03/17 06:43:41 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:41 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:41 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:41 [INFO] raft: Node at 127.0.0.1:15576 [Leader] entering Leader state
2016/03/17 06:43:41 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:41 [INFO] consul: New leader elected: Node 15575
2016/03/17 06:43:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:41 [DEBUG] raft: Node 127.0.0.1:15576 updated peer set (2): [127.0.0.1:15576]
2016/03/17 06:43:41 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:41 [INFO] consul: member 'Node 15575' joined, marking health alive
2016/03/17 06:43:41 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:41 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:41 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:41 [INFO] raft: Node at 127.0.0.1:15580 [Leader] entering Leader state
2016/03/17 06:43:41 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:41 [INFO] consul: New leader elected: Node 15579
2016/03/17 06:43:42 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:42 [INFO] raft: Node at 127.0.0.1:15584 [Follower] entering Follower state
2016/03/17 06:43:42 [INFO] serf: EventMemberJoin: Node 15583 127.0.0.1
2016/03/17 06:43:42 [INFO] consul: adding LAN server Node 15583 (Addr: 127.0.0.1:15584) (DC: dc2)
2016/03/17 06:43:42 [INFO] serf: EventMemberJoin: Node 15583.dc2 127.0.0.1
2016/03/17 06:43:42 [INFO] consul: adding WAN server Node 15583.dc2 (Addr: 127.0.0.1:15584) (DC: dc2)
2016/03/17 06:43:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:42 [INFO] raft: Node at 127.0.0.1:15584 [Candidate] entering Candidate state
2016/03/17 06:43:42 [DEBUG] raft: Node 127.0.0.1:15580 updated peer set (2): [127.0.0.1:15580]
2016/03/17 06:43:42 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:42 [INFO] consul: member 'Node 15579' joined, marking health alive
2016/03/17 06:43:42 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:42 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:42 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:42 [INFO] raft: Node at 127.0.0.1:15584 [Leader] entering Leader state
2016/03/17 06:43:42 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:42 [INFO] consul: New leader elected: Node 15583
2016/03/17 06:43:43 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:43 [DEBUG] raft: Node 127.0.0.1:15584 updated peer set (2): [127.0.0.1:15584]
2016/03/17 06:43:43 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:43 [INFO] consul: member 'Node 15583' joined, marking health alive
2016/03/17 06:43:43 [DEBUG] memberlist: TCP connection from=127.0.0.1:43839
2016/03/17 06:43:43 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15578
2016/03/17 06:43:43 [INFO] serf: EventMemberJoin: Node 15579.dc1 127.0.0.1
2016/03/17 06:43:43 [INFO] consul: adding WAN server Node 15579.dc1 (Addr: 127.0.0.1:15580) (DC: dc1)
2016/03/17 06:43:43 [INFO] serf: EventMemberJoin: Node 15575.xxx 127.0.0.1
2016/03/17 06:43:43 [INFO] consul: adding WAN server Node 15575.xxx (Addr: 127.0.0.1:15576) (DC: xxx)
2016/03/17 06:43:43 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15578
2016/03/17 06:43:43 [DEBUG] memberlist: TCP connection from=127.0.0.1:43840
2016/03/17 06:43:43 [INFO] serf: EventMemberJoin: Node 15583.dc2 127.0.0.1
2016/03/17 06:43:43 [INFO] consul: adding WAN server Node 15583.dc2 (Addr: 127.0.0.1:15584) (DC: dc2)
2016/03/17 06:43:43 [INFO] serf: EventMemberJoin: Node 15579.dc1 127.0.0.1
2016/03/17 06:43:43 [INFO] consul: adding WAN server Node 15579.dc1 (Addr: 127.0.0.1:15580) (DC: dc1)
2016/03/17 06:43:43 [INFO] serf: EventMemberJoin: Node 15575.xxx 127.0.0.1
2016/03/17 06:43:43 [INFO] consul: adding WAN server Node 15575.xxx (Addr: 127.0.0.1:15576) (DC: xxx)
2016/03/17 06:43:43 [INFO] consul: shutting down server
2016/03/17 06:43:43 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:43 [INFO] serf: EventMemberJoin: Node 15583.dc2 127.0.0.1
2016/03/17 06:43:43 [INFO] consul: adding WAN server Node 15583.dc2 (Addr: 127.0.0.1:15584) (DC: dc2)
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15579.dc1
2016/03/17 06:43:43 [DEBUG] serf: messageJoinType: Node 15583.dc2
2016/03/17 06:43:43 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:43 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:43:43 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/17 06:43:43 [INFO] consul: shutting down server
2016/03/17 06:43:43 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:43 [DEBUG] memberlist: Failed UDP ping: Node 15583.dc2 (timeout reached)
2016/03/17 06:43:43 [DEBUG] memberlist: Failed UDP ping: Node 15583.dc2 (timeout reached)
2016/03/17 06:43:43 [INFO] memberlist: Suspect Node 15583.dc2 has failed, no acks received
2016/03/17 06:43:43 [INFO] memberlist: Suspect Node 15583.dc2 has failed, no acks received
2016/03/17 06:43:43 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:43 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/17 06:43:43 [DEBUG] memberlist: Failed UDP ping: Node 15583.dc2 (timeout reached)
2016/03/17 06:43:43 [INFO] consul: shutting down server
2016/03/17 06:43:43 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:43 [INFO] memberlist: Suspect Node 15583.dc2 has failed, no acks received
2016/03/17 06:43:43 [INFO] memberlist: Marking Node 15583.dc2 as failed, suspect timeout reached
2016/03/17 06:43:43 [INFO] serf: EventMemberFailed: Node 15583.dc2 127.0.0.1
2016/03/17 06:43:43 [INFO] memberlist: Marking Node 15583.dc2 as failed, suspect timeout reached
2016/03/17 06:43:43 [INFO] serf: EventMemberFailed: Node 15583.dc2 127.0.0.1
2016/03/17 06:43:44 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:44 [DEBUG] memberlist: Failed UDP ping: Node 15579.dc1 (timeout reached)
2016/03/17 06:43:44 [INFO] memberlist: Suspect Node 15579.dc1 has failed, no acks received
--- PASS: TestRTT_getDatacentersByDistance (4.39s)
=== RUN   TestUserEventNames
--- PASS: TestUserEventNames (0.00s)
=== RUN   TestServer_StartStop
2016/03/17 06:43:44 [INFO] memberlist: Marking Node 15579.dc1 as failed, suspect timeout reached
2016/03/17 06:43:44 [INFO] serf: EventMemberFailed: Node 15579.dc1 127.0.0.1
2016/03/17 06:43:44 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state
2016/03/17 06:43:44 [INFO] serf: EventMemberJoin: bm-wb-04 127.0.0.1
2016/03/17 06:43:44 [INFO] consul: adding LAN server bm-wb-04 (Addr: 127.0.0.1:8300) (DC: dc1)
2016/03/17 06:43:44 [INFO] serf: EventMemberJoin: bm-wb-04.dc1 127.0.0.1
2016/03/17 06:43:44 [INFO] consul: shutting down server
2016/03/17 06:43:44 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:44 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:44 [INFO] consul: shutting down server
--- PASS: TestServer_StartStop (0.63s)
=== RUN   TestServer_JoinLAN
2016/03/17 06:43:45 [INFO] raft: Node at 127.0.0.1:15588 [Follower] entering Follower state
2016/03/17 06:43:45 [INFO] serf: EventMemberJoin: Node 15587 127.0.0.1
2016/03/17 06:43:45 [INFO] consul: adding LAN server Node 15587 (Addr: 127.0.0.1:15588) (DC: dc1)
2016/03/17 06:43:45 [INFO] serf: EventMemberJoin: Node 15587.dc1 127.0.0.1
2016/03/17 06:43:45 [INFO] consul: adding WAN server Node 15587.dc1 (Addr: 127.0.0.1:15588) (DC: dc1)
2016/03/17 06:43:45 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:45 [INFO] raft: Node at 127.0.0.1:15588 [Candidate] entering Candidate state
2016/03/17 06:43:46 [INFO] raft: Node at 127.0.0.1:15592 [Follower] entering Follower state
2016/03/17 06:43:46 [INFO] serf: EventMemberJoin: Node 15591 127.0.0.1
2016/03/17 06:43:46 [INFO] consul: adding LAN server Node 15591 (Addr: 127.0.0.1:15592) (DC: dc1)
2016/03/17 06:43:46 [INFO] serf: EventMemberJoin: Node 15591.dc1 127.0.0.1
2016/03/17 06:43:46 [INFO] consul: adding WAN server Node 15591.dc1 (Addr: 127.0.0.1:15592) (DC: dc1)
2016/03/17 06:43:46 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15589
2016/03/17 06:43:46 [DEBUG] memberlist: TCP connection from=127.0.0.1:40994
2016/03/17 06:43:46 [INFO] serf: EventMemberJoin: Node 15591 127.0.0.1
2016/03/17 06:43:46 [INFO] consul: adding LAN server Node 15591 (Addr: 127.0.0.1:15592) (DC: dc1)
2016/03/17 06:43:46 [INFO] serf: EventMemberJoin: Node 15587 127.0.0.1
2016/03/17 06:43:46 [INFO] consul: adding LAN server Node 15587 (Addr: 127.0.0.1:15588) (DC: dc1)
2016/03/17 06:43:46 [INFO] consul: shutting down server
2016/03/17 06:43:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:46 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:46 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:46 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:46 [INFO] raft: Node at 127.0.0.1:15588 [Leader] entering Leader state
2016/03/17 06:43:46 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:46 [INFO] consul: New leader elected: Node 15587
2016/03/17 06:43:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:46 [INFO] raft: Node at 127.0.0.1:15592 [Candidate] entering Candidate state
2016/03/17 06:43:46 [DEBUG] memberlist: Failed UDP ping: Node 15591 (timeout reached)
2016/03/17 06:43:46 [INFO] memberlist: Suspect Node 15591 has failed, no acks received
2016/03/17 06:43:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:46 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:46 [DEBUG] memberlist: Failed UDP ping: Node 15591 (timeout reached)
2016/03/17 06:43:46 [INFO] memberlist: Suspect Node 15591 has failed, no acks received
2016/03/17 06:43:46 [INFO] memberlist: Marking Node 15591 as failed, suspect timeout reached
2016/03/17 06:43:46 [INFO] serf: EventMemberFailed: Node 15591 127.0.0.1
2016/03/17 06:43:46 [INFO] consul: removing LAN server Node 15591 (Addr: 127.0.0.1:15592) (DC: dc1)
2016/03/17 06:43:46 [DEBUG] memberlist: Failed UDP ping: Node 15591 (timeout reached)
2016/03/17 06:43:46 [INFO] memberlist: Suspect Node 15591 has failed, no acks received
2016/03/17 06:43:46 [DEBUG] raft: Node 127.0.0.1:15588 updated peer set (2): [127.0.0.1:15588]
2016/03/17 06:43:46 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:46 [INFO] consul: member 'Node 15587' joined, marking health alive
2016/03/17 06:43:46 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:46 [INFO] consul: member 'Node 15591' failed, marking health critical
2016/03/17 06:43:46 [INFO] consul: shutting down server
2016/03/17 06:43:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:46 [WARN] serf: Shutdown without a Leave
--- PASS: TestServer_JoinLAN (2.26s)
=== RUN   TestServer_JoinWAN
2016/03/17 06:43:47 [INFO] raft: Node at 127.0.0.1:15596 [Follower] entering Follower state
2016/03/17 06:43:47 [INFO] serf: EventMemberJoin: Node 15595 127.0.0.1
2016/03/17 06:43:47 [INFO] consul: adding LAN server Node 15595 (Addr: 127.0.0.1:15596) (DC: dc1)
2016/03/17 06:43:47 [INFO] serf: EventMemberJoin: Node 15595.dc1 127.0.0.1
2016/03/17 06:43:47 [INFO] consul: adding WAN server Node 15595.dc1 (Addr: 127.0.0.1:15596) (DC: dc1)
2016/03/17 06:43:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:47 [INFO] raft: Node at 127.0.0.1:15596 [Candidate] entering Candidate state
2016/03/17 06:43:48 [INFO] raft: Node at 127.0.0.1:15600 [Follower] entering Follower state
2016/03/17 06:43:48 [INFO] serf: EventMemberJoin: Node 15599 127.0.0.1
2016/03/17 06:43:48 [INFO] consul: adding LAN server Node 15599 (Addr: 127.0.0.1:15600) (DC: dc2)
2016/03/17 06:43:48 [INFO] serf: EventMemberJoin: Node 15599.dc2 127.0.0.1
2016/03/17 06:43:48 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15598
2016/03/17 06:43:48 [INFO] consul: adding WAN server Node 15599.dc2 (Addr: 127.0.0.1:15600) (DC: dc2)
2016/03/17 06:43:48 [DEBUG] memberlist: TCP connection from=127.0.0.1:41535
2016/03/17 06:43:48 [INFO] serf: EventMemberJoin: Node 15595.dc1 127.0.0.1
2016/03/17 06:43:48 [INFO] consul: adding WAN server Node 15595.dc1 (Addr: 127.0.0.1:15596) (DC: dc1)
2016/03/17 06:43:48 [INFO] serf: EventMemberJoin: Node 15599.dc2 127.0.0.1
2016/03/17 06:43:48 [INFO] consul: adding WAN server Node 15599.dc2 (Addr: 127.0.0.1:15600) (DC: dc2)
2016/03/17 06:43:48 [INFO] consul: shutting down server
2016/03/17 06:43:48 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:48 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:48 [INFO] raft: Node at 127.0.0.1:15600 [Candidate] entering Candidate state
2016/03/17 06:43:48 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:48 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:48 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:48 [INFO] raft: Node at 127.0.0.1:15596 [Leader] entering Leader state
2016/03/17 06:43:48 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:48 [INFO] consul: New leader elected: Node 15595
2016/03/17 06:43:48 [DEBUG] serf: messageJoinType: Node 15599.dc2
2016/03/17 06:43:48 [DEBUG] serf: messageJoinType: Node 15599.dc2
2016/03/17 06:43:48 [DEBUG] serf: messageJoinType: Node 15599.dc2
2016/03/17 06:43:48 [DEBUG] serf: messageJoinType: Node 15599.dc2
2016/03/17 06:43:48 [DEBUG] serf: messageJoinType: Node 15599.dc2
2016/03/17 06:43:48 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:48 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:48 [DEBUG] raft: Node 127.0.0.1:15596 updated peer set (2): [127.0.0.1:15596]
2016/03/17 06:43:48 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:48 [INFO] consul: member 'Node 15595' joined, marking health alive
2016/03/17 06:43:48 [DEBUG] memberlist: Failed UDP ping: Node 15599.dc2 (timeout reached)
2016/03/17 06:43:49 [INFO] memberlist: Suspect Node 15599.dc2 has failed, no acks received
2016/03/17 06:43:49 [DEBUG] memberlist: Failed UDP ping: Node 15599.dc2 (timeout reached)
2016/03/17 06:43:49 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:49 [INFO] consul: shutting down server
2016/03/17 06:43:49 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:49 [INFO] memberlist: Suspect Node 15599.dc2 has failed, no acks received
2016/03/17 06:43:49 [INFO] memberlist: Marking Node 15599.dc2 as failed, suspect timeout reached
2016/03/17 06:43:49 [INFO] serf: EventMemberFailed: Node 15599.dc2 127.0.0.1
2016/03/17 06:43:49 [WARN] serf: Shutdown without a Leave
--- PASS: TestServer_JoinWAN (2.29s)
=== RUN   TestServer_JoinSeparateLanAndWanAddresses
2016/03/17 06:43:49 [INFO] raft: Node at 127.0.0.1:15604 [Follower] entering Follower state
2016/03/17 06:43:49 [INFO] serf: EventMemberJoin: Node 15603 127.0.0.1
2016/03/17 06:43:49 [INFO] consul: adding LAN server Node 15603 (Addr: 127.0.0.1:15604) (DC: dc1)
2016/03/17 06:43:49 [INFO] serf: EventMemberJoin: Node 15603.dc1 127.0.0.1
2016/03/17 06:43:49 [INFO] consul: adding WAN server Node 15603.dc1 (Addr: 127.0.0.1:15604) (DC: dc1)
2016/03/17 06:43:49 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:49 [INFO] raft: Node at 127.0.0.1:15604 [Candidate] entering Candidate state
2016/03/17 06:43:50 [INFO] raft: Node at 127.0.0.1:15608 [Follower] entering Follower state
2016/03/17 06:43:50 [INFO] serf: EventMemberJoin: s2 127.0.0.3
2016/03/17 06:43:50 [INFO] consul: adding LAN server s2 (Addr: 127.0.0.3:15608) (DC: dc2)
2016/03/17 06:43:50 [INFO] serf: EventMemberJoin: s2.dc2 127.0.0.2
2016/03/17 06:43:50 [INFO] consul: adding WAN server s2.dc2 (Addr: 127.0.0.2:15608) (DC: dc2)
2016/03/17 06:43:50 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:50 [INFO] raft: Node at 127.0.0.1:15608 [Candidate] entering Candidate state
2016/03/17 06:43:50 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:50 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:50 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:50 [INFO] raft: Node at 127.0.0.1:15604 [Leader] entering Leader state
2016/03/17 06:43:50 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:50 [INFO] consul: New leader elected: Node 15603
2016/03/17 06:43:50 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:50 [DEBUG] raft: Node 127.0.0.1:15604 updated peer set (2): [127.0.0.1:15604]
2016/03/17 06:43:50 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:50 [INFO] consul: member 'Node 15603' joined, marking health alive
2016/03/17 06:43:51 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:51 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:51 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:51 [INFO] raft: Node at 127.0.0.1:15608 [Leader] entering Leader state
2016/03/17 06:43:51 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:51 [INFO] consul: New leader elected: s2
2016/03/17 06:43:51 [INFO] raft: Node at 127.0.0.1:15612 [Follower] entering Follower state
2016/03/17 06:43:51 [INFO] serf: EventMemberJoin: Node 15611 127.0.0.1
2016/03/17 06:43:51 [INFO] consul: adding LAN server Node 15611 (Addr: 127.0.0.1:15612) (DC: dc2)
2016/03/17 06:43:51 [INFO] serf: EventMemberJoin: Node 15611.dc2 127.0.0.1
2016/03/17 06:43:51 [INFO] consul: adding WAN server Node 15611.dc2 (Addr: 127.0.0.1:15612) (DC: dc2)
2016/03/17 06:43:51 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15606
2016/03/17 06:43:51 [DEBUG] memberlist: TCP connection from=127.0.0.1:57615
2016/03/17 06:43:51 [INFO] serf: EventMemberJoin: s2.dc2 127.0.0.2
2016/03/17 06:43:51 [INFO] consul: adding WAN server s2.dc2 (Addr: 127.0.0.2:15608) (DC: dc2)
2016/03/17 06:43:51 [INFO] serf: EventMemberJoin: Node 15603.dc1 127.0.0.1
2016/03/17 06:43:51 [INFO] consul: adding WAN server Node 15603.dc1 (Addr: 127.0.0.1:15604) (DC: dc1)
2016/03/17 06:43:51 [DEBUG] memberlist: TCP connection from=127.0.0.1:46050
2016/03/17 06:43:51 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15609
2016/03/17 06:43:51 [INFO] serf: EventMemberJoin: Node 15611 127.0.0.1
2016/03/17 06:43:51 [INFO] serf: EventMemberJoin: s2 127.0.0.3
2016/03/17 06:43:51 [INFO] consul: adding LAN server Node 15611 (Addr: 127.0.0.1:15612) (DC: dc2)
2016/03/17 06:43:51 [INFO] consul: adding LAN server s2 (Addr: 127.0.0.3:15608) (DC: dc2)
2016/03/17 06:43:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:51 [INFO] raft: Node at 127.0.0.1:15612 [Candidate] entering Candidate state
2016/03/17 06:43:51 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:43:51 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:43:51 [DEBUG] serf: messageJoinType: Node 15611
2016/03/17 06:43:51 [DEBUG] serf: messageJoinType: s2.dc2
2016/03/17 06:43:51 [DEBUG] serf: messageJoinType: s2.dc2
2016/03/17 06:43:51 [INFO] consul: shutting down server
2016/03/17 06:43:51 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:51 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:51 [DEBUG] raft: Node 127.0.0.1:15608 updated peer set (2): [127.0.0.1:15608]
2016/03/17 06:43:51 [DEBUG] serf: messageJoinType: s2.dc2
2016/03/17 06:43:51 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/17 06:43:51 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:51 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/17 06:43:51 [WARN] memberlist: Refuting a suspect message (from: Node 15603.dc1)
2016/03/17 06:43:51 [DEBUG] serf: messageJoinType: s2.dc2
2016/03/17 06:43:51 [DEBUG] serf: messageJoinType: s2.dc2
2016/03/17 06:43:51 [DEBUG] memberlist: Failed UDP ping: Node 15611 (timeout reached)
2016/03/17 06:43:51 [INFO] memberlist: Suspect Node 15611 has failed, no acks received
2016/03/17 06:43:51 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:51 [INFO] consul: member 's2' joined, marking health alive
2016/03/17 06:43:51 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/17 06:43:51 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/17 06:43:51 [DEBUG] memberlist: Failed UDP ping: Node 15611 (timeout reached)
2016/03/17 06:43:51 [ERR] consul: 'Node 15611' and 's2' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:51 [INFO] consul: member 'Node 15611' joined, marking health alive
2016/03/17 06:43:51 [INFO] memberlist: Suspect Node 15611 has failed, no acks received
2016/03/17 06:43:51 [INFO] memberlist: Marking Node 15611 as failed, suspect timeout reached
2016/03/17 06:43:51 [INFO] serf: EventMemberFailed: Node 15611 127.0.0.1
2016/03/17 06:43:51 [INFO] consul: removing LAN server Node 15611 (Addr: 127.0.0.1:15612) (DC: dc2)
2016/03/17 06:43:51 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/17 06:43:51 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/17 06:43:51 [WARN] memberlist: Refuting a suspect message (from: Node 15603.dc1)
2016/03/17 06:43:52 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/17 06:43:52 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:52 [ERR] consul: 'Node 15611' and 's2' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/03/17 06:43:52 [INFO] consul: member 'Node 15611' failed, marking health critical
2016/03/17 06:43:52 [INFO] consul: shutting down server
2016/03/17 06:43:52 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:52 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/17 06:43:52 [INFO] memberlist: Marking s2.dc2 as failed, suspect timeout reached
2016/03/17 06:43:52 [INFO] serf: EventMemberFailed: s2.dc2 127.0.0.2
2016/03/17 06:43:52 [INFO] consul: removing WAN server s2.dc2 (Addr: 127.0.0.2:15608) (DC: dc2)
2016/03/17 06:43:52 [INFO] serf: EventMemberJoin: s2.dc2 127.0.0.2
2016/03/17 06:43:52 [INFO] consul: adding WAN server s2.dc2 (Addr: 127.0.0.2:15608) (DC: dc2)
2016/03/17 06:43:52 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/17 06:43:52 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/17 06:43:52 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:52 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/17 06:43:52 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/03/17 06:43:52 [ERR] consul: failed to reconcile member: {Node 15611 127.0.0.1 15613 map[port:15612 bootstrap:1 role:consul dc:dc2 vsn:2 vsn_min:1 vsn_max:3 build:] failed 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:43:52 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/03/17 06:43:52 [INFO] consul: shutting down server
2016/03/17 06:43:52 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:52 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/17 06:43:52 [DEBUG] memberlist: Failed UDP ping: s2.dc2 (timeout reached)
2016/03/17 06:43:52 [INFO] memberlist: Suspect s2.dc2 has failed, no acks received
2016/03/17 06:43:52 [WARN] serf: Shutdown without a Leave
2016/03/17 06:43:52 [INFO] memberlist: Marking s2.dc2 as failed, suspect timeout reached
2016/03/17 06:43:52 [INFO] serf: EventMemberFailed: s2.dc2 127.0.0.2
--- PASS: TestServer_JoinSeparateLanAndWanAddresses (3.26s)
=== RUN   TestServer_LeaveLeader
2016/03/17 06:43:53 [INFO] raft: Node at 127.0.0.1:15616 [Follower] entering Follower state
2016/03/17 06:43:53 [INFO] serf: EventMemberJoin: Node 15615 127.0.0.1
2016/03/17 06:43:53 [INFO] consul: adding LAN server Node 15615 (Addr: 127.0.0.1:15616) (DC: dc1)
2016/03/17 06:43:53 [INFO] serf: EventMemberJoin: Node 15615.dc1 127.0.0.1
2016/03/17 06:43:53 [INFO] consul: adding WAN server Node 15615.dc1 (Addr: 127.0.0.1:15616) (DC: dc1)
2016/03/17 06:43:53 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:53 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/17 06:43:53 [INFO] raft: Node at 127.0.0.1:15620 [Follower] entering Follower state
2016/03/17 06:43:53 [INFO] serf: EventMemberJoin: Node 15619 127.0.0.1
2016/03/17 06:43:53 [INFO] consul: adding LAN server Node 15619 (Addr: 127.0.0.1:15620) (DC: dc1)
2016/03/17 06:43:53 [INFO] serf: EventMemberJoin: Node 15619.dc1 127.0.0.1
2016/03/17 06:43:53 [DEBUG] memberlist: TCP connection from=127.0.0.1:50673
2016/03/17 06:43:53 [INFO] consul: adding WAN server Node 15619.dc1 (Addr: 127.0.0.1:15620) (DC: dc1)
2016/03/17 06:43:53 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15617
2016/03/17 06:43:53 [INFO] serf: EventMemberJoin: Node 15619 127.0.0.1
2016/03/17 06:43:53 [INFO] consul: adding LAN server Node 15619 (Addr: 127.0.0.1:15620) (DC: dc1)
2016/03/17 06:43:53 [INFO] serf: EventMemberJoin: Node 15615 127.0.0.1
2016/03/17 06:43:53 [INFO] consul: adding LAN server Node 15615 (Addr: 127.0.0.1:15616) (DC: dc1)
2016/03/17 06:43:53 [DEBUG] raft: Votes needed: 1
2016/03/17 06:43:53 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:53 [INFO] raft: Election won. Tally: 1
2016/03/17 06:43:53 [INFO] raft: Node at 127.0.0.1:15616 [Leader] entering Leader state
2016/03/17 06:43:53 [INFO] consul: cluster leadership acquired
2016/03/17 06:43:53 [INFO] consul: New leader elected: Node 15615
2016/03/17 06:43:53 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:43:53 [INFO] consul: New leader elected: Node 15615
2016/03/17 06:43:53 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:43:53 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:43:53 [DEBUG] serf: messageJoinType: Node 15619
2016/03/17 06:43:53 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:43:53 [DEBUG] serf: messageJoinType: Node 15619
2016/03/17 06:43:53 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:43:53 [DEBUG] serf: messageJoinType: Node 15619
2016/03/17 06:43:53 [DEBUG] serf: messageJoinType: Node 15619
2016/03/17 06:43:53 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:43:53 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:43:53 [DEBUG] serf: messageJoinType: Node 15619
2016/03/17 06:43:53 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:43:53 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:43:53 [DEBUG] serf: messageJoinType: Node 15619
2016/03/17 06:43:53 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:43:53 [DEBUG] serf: messageJoinType: Node 15619
2016/03/17 06:43:53 [DEBUG] serf: messageJoinType: Node 15619
2016/03/17 06:43:54 [DEBUG] raft: Node 127.0.0.1:15616 updated peer set (2): [127.0.0.1:15616]
2016/03/17 06:43:54 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:43:54 [INFO] consul: member 'Node 15615' joined, marking health alive
2016/03/17 06:43:54 [DEBUG] raft: Node 127.0.0.1:15616 updated peer set (2): [127.0.0.1:15620 127.0.0.1:15616]
2016/03/17 06:43:54 [INFO] raft: Added peer 127.0.0.1:15620, starting replication
2016/03/17 06:43:54 [DEBUG] raft-net: 127.0.0.1:15620 accepted connection from: 127.0.0.1:35138
2016/03/17 06:43:54 [DEBUG] raft-net: 127.0.0.1:15620 accepted connection from: 127.0.0.1:35139
2016/03/17 06:43:54 [DEBUG] raft: Failed to contact 127.0.0.1:15620 in 280.766ms
2016/03/17 06:43:54 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:43:54 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:43:54 [INFO] raft: Node at 127.0.0.1:15616 [Follower] entering Follower state
2016/03/17 06:43:54 [INFO] consul: cluster leadership lost
2016/03/17 06:43:54 [ERR] consul: failed to reconcile member: {Node 15619 127.0.0.1 15621 map[dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15620 role:consul] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:43:54 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:43:54 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/17 06:43:54 [WARN] raft: AppendEntries to 127.0.0.1:15620 rejected, sending older logs (next: 1)
2016/03/17 06:43:54 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:54 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/17 06:43:54 [DEBUG] raft-net: 127.0.0.1:15620 accepted connection from: 127.0.0.1:35140
2016/03/17 06:43:55 [DEBUG] raft: Node 127.0.0.1:15620 updated peer set (2): [127.0.0.1:15616]
2016/03/17 06:43:55 [WARN] raft: Rejecting vote from 127.0.0.1:15616 since we have a leader: 127.0.0.1:15616
2016/03/17 06:43:55 [INFO] raft: pipelining replication to peer 127.0.0.1:15620
2016/03/17 06:43:55 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15620
2016/03/17 06:43:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:55 [INFO] raft: Node at 127.0.0.1:15620 [Candidate] entering Candidate state
2016/03/17 06:43:55 [DEBUG] raft: Votes needed: 2
2016/03/17 06:43:55 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:55 [DEBUG] raft-net: 127.0.0.1:15616 accepted connection from: 127.0.0.1:46470
2016/03/17 06:43:55 [INFO] raft: Duplicate RequestVote for same term: 2
2016/03/17 06:43:55 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:43:55 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/17 06:43:56 [DEBUG] raft: Votes needed: 2
2016/03/17 06:43:56 [DEBUG] raft: Votes needed: 2
2016/03/17 06:43:56 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:56 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:43:56 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/17 06:43:56 [INFO] raft: Node at 127.0.0.1:15620 [Follower] entering Follower state
2016/03/17 06:43:56 [DEBUG] raft: Votes needed: 2
2016/03/17 06:43:56 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:56 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:43:56 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/17 06:43:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:43:57 [INFO] raft: Node at 127.0.0.1:15620 [Candidate] entering Candidate state
2016/03/17 06:43:57 [DEBUG] raft: Votes needed: 2
2016/03/17 06:43:57 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/17 06:43:57 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:57 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:43:57 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/17 06:43:57 [DEBUG] raft: Votes needed: 2
2016/03/17 06:43:57 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/17 06:43:57 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:43:57 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:43:57 [INFO] raft: Node at 127.0.0.1:15620 [Candidate] entering Candidate state
2016/03/17 06:44:03 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:03 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:03 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/17 06:44:03 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:03 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/17 06:44:03 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:03 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/17 06:44:04 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:04 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:04 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:04 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/17 06:44:04 [INFO] raft: Node at 127.0.0.1:15620 [Follower] entering Follower state
2016/03/17 06:44:04 [INFO] consul: shutting down server
2016/03/17 06:44:04 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:04 [DEBUG] memberlist: Failed UDP ping: Node 15619 (timeout reached)
2016/03/17 06:44:04 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:04 [INFO] memberlist: Suspect Node 15619 has failed, no acks received
2016/03/17 06:44:04 [DEBUG] memberlist: Failed UDP ping: Node 15619 (timeout reached)
2016/03/17 06:44:04 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:44:04 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15620: EOF
2016/03/17 06:44:04 [INFO] memberlist: Suspect Node 15619 has failed, no acks received
2016/03/17 06:44:05 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:05 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:05 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:05 [INFO] raft: Node at 127.0.0.1:15616 [Candidate] entering Candidate state
2016/03/17 06:44:05 [INFO] memberlist: Marking Node 15619 as failed, suspect timeout reached
2016/03/17 06:44:05 [INFO] serf: EventMemberFailed: Node 15619 127.0.0.1
2016/03/17 06:44:05 [INFO] consul: removing LAN server Node 15619 (Addr: 127.0.0.1:15620) (DC: dc1)
2016/03/17 06:44:05 [DEBUG] memberlist: Failed UDP ping: Node 15619 (timeout reached)
2016/03/17 06:44:05 [INFO] memberlist: Suspect Node 15619 has failed, no acks received
2016/03/17 06:44:05 [INFO] consul: shutting down server
2016/03/17 06:44:05 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:05 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:44:05 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15620: EOF
2016/03/17 06:44:05 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:05 [DEBUG] raft: Votes needed: 2
--- FAIL: TestServer_LeaveLeader (13.18s)
	server_test.go:353: should have 2 peers: [127.0.0.1:15620 127.0.0.1:15616]
=== RUN   TestServer_Leave
2016/03/17 06:44:06 [INFO] raft: Node at 127.0.0.1:15624 [Follower] entering Follower state
2016/03/17 06:44:06 [INFO] serf: EventMemberJoin: Node 15623 127.0.0.1
2016/03/17 06:44:06 [INFO] consul: adding LAN server Node 15623 (Addr: 127.0.0.1:15624) (DC: dc1)
2016/03/17 06:44:06 [INFO] serf: EventMemberJoin: Node 15623.dc1 127.0.0.1
2016/03/17 06:44:06 [INFO] consul: adding WAN server Node 15623.dc1 (Addr: 127.0.0.1:15624) (DC: dc1)
2016/03/17 06:44:06 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:06 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:07 [INFO] raft: Node at 127.0.0.1:15628 [Follower] entering Follower state
2016/03/17 06:44:07 [INFO] serf: EventMemberJoin: Node 15627 127.0.0.1
2016/03/17 06:44:07 [INFO] consul: adding LAN server Node 15627 (Addr: 127.0.0.1:15628) (DC: dc1)
2016/03/17 06:44:07 [INFO] serf: EventMemberJoin: Node 15627.dc1 127.0.0.1
2016/03/17 06:44:07 [INFO] consul: adding WAN server Node 15627.dc1 (Addr: 127.0.0.1:15628) (DC: dc1)
2016/03/17 06:44:07 [DEBUG] memberlist: TCP connection from=127.0.0.1:33413
2016/03/17 06:44:07 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15625
2016/03/17 06:44:07 [INFO] serf: EventMemberJoin: Node 15627 127.0.0.1
2016/03/17 06:44:07 [INFO] serf: EventMemberJoin: Node 15623 127.0.0.1
2016/03/17 06:44:07 [INFO] consul: adding LAN server Node 15627 (Addr: 127.0.0.1:15628) (DC: dc1)
2016/03/17 06:44:07 [INFO] consul: adding LAN server Node 15623 (Addr: 127.0.0.1:15624) (DC: dc1)
2016/03/17 06:44:07 [DEBUG] raft: Votes needed: 1
2016/03/17 06:44:07 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:07 [INFO] raft: Election won. Tally: 1
2016/03/17 06:44:07 [INFO] raft: Node at 127.0.0.1:15624 [Leader] entering Leader state
2016/03/17 06:44:07 [INFO] consul: cluster leadership acquired
2016/03/17 06:44:07 [INFO] consul: New leader elected: Node 15623
2016/03/17 06:44:07 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:44:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:44:07 [INFO] consul: New leader elected: Node 15623
2016/03/17 06:44:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:44:07 [DEBUG] serf: messageJoinType: Node 15627
2016/03/17 06:44:07 [DEBUG] serf: messageJoinType: Node 15627
2016/03/17 06:44:07 [DEBUG] serf: messageJoinType: Node 15627
2016/03/17 06:44:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:44:07 [DEBUG] serf: messageJoinType: Node 15627
2016/03/17 06:44:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:44:07 [DEBUG] serf: messageJoinType: Node 15627
2016/03/17 06:44:07 [DEBUG] serf: messageJoinType: Node 15627
2016/03/17 06:44:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:44:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:44:07 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:44:07 [DEBUG] serf: messageJoinType: Node 15627
2016/03/17 06:44:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:44:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:44:07 [DEBUG] serf: messageJoinType: Node 15627
2016/03/17 06:44:07 [DEBUG] raft: Node 127.0.0.1:15624 updated peer set (2): [127.0.0.1:15624]
2016/03/17 06:44:07 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:44:07 [INFO] consul: member 'Node 15623' joined, marking health alive
2016/03/17 06:44:07 [DEBUG] raft: Node 127.0.0.1:15624 updated peer set (2): [127.0.0.1:15628 127.0.0.1:15624]
2016/03/17 06:44:07 [INFO] raft: Added peer 127.0.0.1:15628, starting replication
2016/03/17 06:44:07 [DEBUG] raft-net: 127.0.0.1:15628 accepted connection from: 127.0.0.1:58764
2016/03/17 06:44:07 [DEBUG] raft-net: 127.0.0.1:15628 accepted connection from: 127.0.0.1:58765
2016/03/17 06:44:07 [DEBUG] raft: Failed to contact 127.0.0.1:15628 in 232.865667ms
2016/03/17 06:44:07 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:44:07 [INFO] raft: Node at 127.0.0.1:15624 [Follower] entering Follower state
2016/03/17 06:44:07 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:44:07 [INFO] consul: cluster leadership lost
2016/03/17 06:44:07 [ERR] consul: failed to reconcile member: {Node 15627 127.0.0.1 15629 map[vsn:2 vsn_min:1 vsn_max:3 build: port:15628 role:consul dc:dc1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:44:07 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:44:08 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/17 06:44:08 [WARN] raft: AppendEntries to 127.0.0.1:15628 rejected, sending older logs (next: 1)
2016/03/17 06:44:08 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:08 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:08 [DEBUG] raft-net: 127.0.0.1:15628 accepted connection from: 127.0.0.1:58766
2016/03/17 06:44:08 [DEBUG] raft: Node 127.0.0.1:15628 updated peer set (2): [127.0.0.1:15624]
2016/03/17 06:44:08 [WARN] raft: Rejecting vote from 127.0.0.1:15624 since we have a leader: 127.0.0.1:15624
2016/03/17 06:44:08 [INFO] raft: pipelining replication to peer 127.0.0.1:15628
2016/03/17 06:44:08 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15628
2016/03/17 06:44:08 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:08 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:09 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:09 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/17 06:44:09 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:09 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:09 [DEBUG] raft-net: 127.0.0.1:15624 accepted connection from: 127.0.0.1:54320
2016/03/17 06:44:09 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:09 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:09 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:10 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:10 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:10 [INFO] raft: Node at 127.0.0.1:15628 [Follower] entering Follower state
2016/03/17 06:44:10 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:10 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:10 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:10 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:11 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:11 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:11 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:11 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:12 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:12 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:12 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:12 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:13 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:13 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/17 06:44:13 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:13 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/17 06:44:13 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:13 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:13 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:13 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:13 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/17 06:44:13 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:14 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:14 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:14 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:14 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:14 [INFO] raft: Node at 127.0.0.1:15628 [Follower] entering Follower state
2016/03/17 06:44:15 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:15 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:15 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:15 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:15 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/17 06:44:16 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:16 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:16 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/17 06:44:16 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:16 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:16 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:16 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/17 06:44:16 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:16 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:16 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/17 06:44:17 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:17 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:44:17 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:17 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:17 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:17 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:17 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:44:17 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:17 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:17 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/17 06:44:17 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:17 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:17 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:44:17 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:17 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:17 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:17 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:44:17 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:17 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:17 [INFO] raft: Node at 127.0.0.1:15628 [Candidate] entering Candidate state
2016/03/17 06:44:18 [INFO] consul: shutting down server
2016/03/17 06:44:18 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:18 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:18 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:44:18 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15628: EOF
2016/03/17 06:44:18 [DEBUG] memberlist: Failed UDP ping: Node 15627 (timeout reached)
2016/03/17 06:44:18 [INFO] memberlist: Suspect Node 15627 has failed, no acks received
2016/03/17 06:44:18 [DEBUG] memberlist: Failed UDP ping: Node 15627 (timeout reached)
2016/03/17 06:44:18 [INFO] memberlist: Suspect Node 15627 has failed, no acks received
2016/03/17 06:44:18 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:18 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:18 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/17 06:44:18 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:18 [INFO] consul: shutting down server
2016/03/17 06:44:18 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:18 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:18 [INFO] raft: Node at 127.0.0.1:15624 [Candidate] entering Candidate state
2016/03/17 06:44:18 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:18 [INFO] memberlist: Marking Node 15627 as failed, suspect timeout reached
2016/03/17 06:44:18 [INFO] serf: EventMemberFailed: Node 15627 127.0.0.1
2016/03/17 06:44:18 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:44:18 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15628: EOF
2016/03/17 06:44:19 [DEBUG] raft: Votes needed: 2
--- FAIL: TestServer_Leave (13.30s)
	server_test.go:408: should have 2 peers: [127.0.0.1:15628 127.0.0.1:15624]
=== RUN   TestServer_RPC
2016/03/17 06:44:19 [INFO] raft: Node at 127.0.0.1:15632 [Follower] entering Follower state
2016/03/17 06:44:19 [INFO] serf: EventMemberJoin: Node 15631 127.0.0.1
2016/03/17 06:44:19 [INFO] consul: adding LAN server Node 15631 (Addr: 127.0.0.1:15632) (DC: dc1)
2016/03/17 06:44:19 [INFO] serf: EventMemberJoin: Node 15631.dc1 127.0.0.1
2016/03/17 06:44:19 [INFO] consul: shutting down server
2016/03/17 06:44:19 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:19 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:19 [INFO] raft: Node at 127.0.0.1:15632 [Candidate] entering Candidate state
2016/03/17 06:44:20 [DEBUG] raft: Votes needed: 1
--- PASS: TestServer_RPC (1.13s)
=== RUN   TestServer_JoinLAN_TLS
2016/03/17 06:44:20 [INFO] raft: Node at 127.0.0.1:15635 [Follower] entering Follower state
2016/03/17 06:44:20 [INFO] serf: EventMemberJoin: a.testco.internal 127.0.0.1
2016/03/17 06:44:20 [INFO] consul: adding LAN server a.testco.internal (Addr: 127.0.0.1:15635) (DC: dc1)
2016/03/17 06:44:20 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:20 [INFO] raft: Node at 127.0.0.1:15635 [Candidate] entering Candidate state
2016/03/17 06:44:20 [INFO] serf: EventMemberJoin: a.testco.internal.dc1 127.0.0.1
2016/03/17 06:44:20 [INFO] consul: adding WAN server a.testco.internal.dc1 (Addr: 127.0.0.1:15635) (DC: dc1)
2016/03/17 06:44:21 [DEBUG] raft: Votes needed: 1
2016/03/17 06:44:21 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:21 [INFO] raft: Election won. Tally: 1
2016/03/17 06:44:21 [INFO] raft: Node at 127.0.0.1:15635 [Leader] entering Leader state
2016/03/17 06:44:21 [INFO] consul: cluster leadership acquired
2016/03/17 06:44:21 [INFO] consul: New leader elected: a.testco.internal
2016/03/17 06:44:21 [INFO] raft: Node at 127.0.0.1:15638 [Follower] entering Follower state
2016/03/17 06:44:21 [INFO] serf: EventMemberJoin: b.testco.internal 127.0.0.1
2016/03/17 06:44:21 [INFO] consul: adding LAN server b.testco.internal (Addr: 127.0.0.1:15638) (DC: dc1)
2016/03/17 06:44:21 [INFO] serf: EventMemberJoin: b.testco.internal.dc1 127.0.0.1
2016/03/17 06:44:21 [INFO] consul: adding WAN server b.testco.internal.dc1 (Addr: 127.0.0.1:15638) (DC: dc1)
2016/03/17 06:44:21 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15636
2016/03/17 06:44:21 [DEBUG] memberlist: TCP connection from=127.0.0.1:40568
2016/03/17 06:44:21 [INFO] serf: EventMemberJoin: b.testco.internal 127.0.0.1
2016/03/17 06:44:21 [INFO] serf: EventMemberJoin: a.testco.internal 127.0.0.1
2016/03/17 06:44:21 [INFO] consul: adding LAN server b.testco.internal (Addr: 127.0.0.1:15638) (DC: dc1)
2016/03/17 06:44:21 [INFO] consul: adding LAN server a.testco.internal (Addr: 127.0.0.1:15635) (DC: dc1)
2016/03/17 06:44:21 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:44:21 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:44:21 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:44:21 [DEBUG] raft: Node 127.0.0.1:15635 updated peer set (2): [127.0.0.1:15635]
2016/03/17 06:44:21 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/17 06:44:21 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/17 06:44:21 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/17 06:44:21 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:44:21 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/17 06:44:21 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:44:21 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/17 06:44:21 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/17 06:44:21 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/03/17 06:44:22 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/17 06:44:22 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:44:22 [INFO] consul: member 'a.testco.internal' joined, marking health alive
2016/03/17 06:44:22 [DEBUG] serf: messageJoinType: b.testco.internal
2016/03/17 06:44:22 [DEBUG] raft: Node 127.0.0.1:15635 updated peer set (2): [127.0.0.1:15638 127.0.0.1:15635]
2016/03/17 06:44:22 [INFO] raft: Added peer 127.0.0.1:15638, starting replication
2016/03/17 06:44:22 [DEBUG] raft-net: 127.0.0.1:15638 accepted connection from: 127.0.0.1:53210
2016/03/17 06:44:22 [DEBUG] raft-net: 127.0.0.1:15638 accepted connection from: 127.0.0.1:53211
2016/03/17 06:44:22 [DEBUG] raft: Failed to contact 127.0.0.1:15638 in 189.038333ms
2016/03/17 06:44:22 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/03/17 06:44:22 [INFO] raft: Node at 127.0.0.1:15635 [Follower] entering Follower state
2016/03/17 06:44:22 [INFO] consul: cluster leadership lost
2016/03/17 06:44:22 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/03/17 06:44:22 [ERR] consul: failed to reconcile member: {b.testco.internal 127.0.0.1 15639 map[dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15638 role:consul] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/03/17 06:44:22 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/03/17 06:44:22 [ERR] consul: failed to wait for barrier: node is not the leader
2016/03/17 06:44:22 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:22 [INFO] raft: Node at 127.0.0.1:15635 [Candidate] entering Candidate state
2016/03/17 06:44:22 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/03/17 06:44:22 [WARN] raft: AppendEntries to 127.0.0.1:15638 rejected, sending older logs (next: 1)
2016/03/17 06:44:22 [DEBUG] raft-net: 127.0.0.1:15638 accepted connection from: 127.0.0.1:53212
2016/03/17 06:44:23 [DEBUG] raft: Node 127.0.0.1:15638 updated peer set (2): [127.0.0.1:15635]
2016/03/17 06:44:23 [WARN] raft: Rejecting vote from 127.0.0.1:15635 since we have a leader: 127.0.0.1:15635
2016/03/17 06:44:23 [INFO] raft: pipelining replication to peer 127.0.0.1:15638
2016/03/17 06:44:23 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15638
2016/03/17 06:44:23 [INFO] consul: shutting down server
2016/03/17 06:44:23 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:23 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:23 [INFO] raft: Node at 127.0.0.1:15638 [Candidate] entering Candidate state
2016/03/17 06:44:23 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:23 [DEBUG] memberlist: Failed UDP ping: b.testco.internal (timeout reached)
2016/03/17 06:44:23 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:23 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:23 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:23 [INFO] raft: Node at 127.0.0.1:15635 [Candidate] entering Candidate state
2016/03/17 06:44:23 [INFO] memberlist: Suspect b.testco.internal has failed, no acks received
2016/03/17 06:44:23 [DEBUG] memberlist: Failed UDP ping: b.testco.internal (timeout reached)
2016/03/17 06:44:23 [INFO] memberlist: Suspect b.testco.internal has failed, no acks received
2016/03/17 06:44:23 [DEBUG] raft-net: 127.0.0.1:15635 accepted connection from: 127.0.0.1:38798
2016/03/17 06:44:23 [INFO] memberlist: Marking b.testco.internal as failed, suspect timeout reached
2016/03/17 06:44:23 [INFO] serf: EventMemberFailed: b.testco.internal 127.0.0.1
2016/03/17 06:44:23 [INFO] consul: removing LAN server b.testco.internal (Addr: 127.0.0.1:15638) (DC: dc1)
2016/03/17 06:44:23 [DEBUG] memberlist: Failed UDP ping: b.testco.internal (timeout reached)
2016/03/17 06:44:23 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:44:23 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15638: EOF
2016/03/17 06:44:23 [INFO] memberlist: Suspect b.testco.internal has failed, no acks received
2016/03/17 06:44:24 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:24 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:24 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:24 [INFO] consul: shutting down server
2016/03/17 06:44:24 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:24 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:24 [INFO] raft: Node at 127.0.0.1:15635 [Candidate] entering Candidate state
2016/03/17 06:44:24 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:24 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:44:24 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15638: EOF
2016/03/17 06:44:24 [DEBUG] raft: Votes needed: 2
--- PASS: TestServer_JoinLAN_TLS (4.71s)
=== RUN   TestServer_Expect
2016/03/17 06:44:25 [INFO] raft: Node at 127.0.0.1:15642 [Follower] entering Follower state
2016/03/17 06:44:25 [INFO] serf: EventMemberJoin: Node 15641 127.0.0.1
2016/03/17 06:44:25 [INFO] consul: adding LAN server Node 15641 (Addr: 127.0.0.1:15642) (DC: dc1)
2016/03/17 06:44:25 [INFO] serf: EventMemberJoin: Node 15641.dc1 127.0.0.1
2016/03/17 06:44:25 [INFO] consul: adding WAN server Node 15641.dc1 (Addr: 127.0.0.1:15642) (DC: dc1)
2016/03/17 06:44:25 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:44:25 [INFO] raft: Node at 127.0.0.1:15646 [Follower] entering Follower state
2016/03/17 06:44:25 [INFO] serf: EventMemberJoin: Node 15645 127.0.0.1
2016/03/17 06:44:25 [INFO] consul: adding LAN server Node 15645 (Addr: 127.0.0.1:15646) (DC: dc1)
2016/03/17 06:44:25 [INFO] serf: EventMemberJoin: Node 15645.dc1 127.0.0.1
2016/03/17 06:44:25 [INFO] consul: adding WAN server Node 15645.dc1 (Addr: 127.0.0.1:15646) (DC: dc1)
2016/03/17 06:44:26 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:44:26 [INFO] raft: Node at 127.0.0.1:15650 [Follower] entering Follower state
2016/03/17 06:44:26 [INFO] serf: EventMemberJoin: Node 15649 127.0.0.1
2016/03/17 06:44:26 [INFO] consul: adding LAN server Node 15649 (Addr: 127.0.0.1:15650) (DC: dc1)
2016/03/17 06:44:26 [INFO] serf: EventMemberJoin: Node 15649.dc1 127.0.0.1
2016/03/17 06:44:26 [INFO] consul: adding WAN server Node 15649.dc1 (Addr: 127.0.0.1:15650) (DC: dc1)
2016/03/17 06:44:26 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15643
2016/03/17 06:44:26 [DEBUG] memberlist: TCP connection from=127.0.0.1:33360
2016/03/17 06:44:26 [INFO] serf: EventMemberJoin: Node 15645 127.0.0.1
2016/03/17 06:44:26 [INFO] serf: EventMemberJoin: Node 15641 127.0.0.1
2016/03/17 06:44:26 [INFO] consul: adding LAN server Node 15645 (Addr: 127.0.0.1:15646) (DC: dc1)
2016/03/17 06:44:26 [INFO] consul: adding LAN server Node 15641 (Addr: 127.0.0.1:15642) (DC: dc1)
2016/03/17 06:44:26 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15643
2016/03/17 06:44:26 [DEBUG] memberlist: TCP connection from=127.0.0.1:33361
2016/03/17 06:44:26 [INFO] serf: EventMemberJoin: Node 15645 127.0.0.1
2016/03/17 06:44:26 [INFO] consul: adding LAN server Node 15645 (Addr: 127.0.0.1:15646) (DC: dc1)
2016/03/17 06:44:26 [INFO] serf: EventMemberJoin: Node 15641 127.0.0.1
2016/03/17 06:44:26 [INFO] serf: EventMemberJoin: Node 15649 127.0.0.1
2016/03/17 06:44:26 [INFO] consul: adding LAN server Node 15649 (Addr: 127.0.0.1:15650) (DC: dc1)
2016/03/17 06:44:26 [INFO] consul: Attempting bootstrap with nodes: [127.0.0.1:15642 127.0.0.1:15646 127.0.0.1:15650]
2016/03/17 06:44:26 [INFO] consul: adding LAN server Node 15641 (Addr: 127.0.0.1:15642) (DC: dc1)
2016/03/17 06:44:26 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15645
2016/03/17 06:44:26 [INFO] serf: EventMemberJoin: Node 15649 127.0.0.1
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15645
2016/03/17 06:44:26 [INFO] consul: adding LAN server Node 15649 (Addr: 127.0.0.1:15650) (DC: dc1)
2016/03/17 06:44:26 [INFO] consul: Attempting bootstrap with nodes: [127.0.0.1:15646 127.0.0.1:15642 127.0.0.1:15650]
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15645
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15645
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15649
2016/03/17 06:44:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:26 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15645
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15645
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15649
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15645
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15645
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15649
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15645
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15649
2016/03/17 06:44:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:26 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15649
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15649
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15649
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15645
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15645
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15649
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15649
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15649
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15645
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15649
2016/03/17 06:44:26 [DEBUG] raft-net: 127.0.0.1:15650 accepted connection from: 127.0.0.1:54142
2016/03/17 06:44:26 [DEBUG] raft-net: 127.0.0.1:15646 accepted connection from: 127.0.0.1:38281
2016/03/17 06:44:26 [DEBUG] serf: messageJoinType: Node 15649
2016/03/17 06:44:26 [DEBUG] raft-net: 127.0.0.1:15642 accepted connection from: 127.0.0.1:40080
2016/03/17 06:44:26 [DEBUG] raft-net: 127.0.0.1:15650 accepted connection from: 127.0.0.1:54144
2016/03/17 06:44:27 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:27 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:27 [INFO] raft: Duplicate RequestVote for same term: 1
2016/03/17 06:44:27 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:27 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:27 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:27 [INFO] raft: Duplicate RequestVote for same term: 1
2016/03/17 06:44:27 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:27 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:27 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:27 [INFO] raft: Duplicate RequestVote for same term: 1
2016/03/17 06:44:27 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:27 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:27 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:27 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:27 [INFO] raft: Duplicate RequestVote for same term: 2
2016/03/17 06:44:27 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:27 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:27 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:27 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:27 [INFO] raft: Duplicate RequestVote for same term: 2
2016/03/17 06:44:27 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:27 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:27 [INFO] raft: Duplicate RequestVote for same term: 2
2016/03/17 06:44:27 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:27 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:28 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:28 [INFO] raft: Duplicate RequestVote for same term: 3
2016/03/17 06:44:28 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:28 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:28 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:28 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:28 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:28 [INFO] raft: Duplicate RequestVote for same term: 3
2016/03/17 06:44:28 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:28 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:28 [INFO] raft: Duplicate RequestVote for same term: 3
2016/03/17 06:44:28 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:28 [DEBUG] raft-net: 127.0.0.1:15650 accepted connection from: 127.0.0.1:54146
2016/03/17 06:44:28 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:28 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:28 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:28 [INFO] raft: Duplicate RequestVote for same term: 4
2016/03/17 06:44:28 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:28 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:28 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:28 [INFO] raft: Duplicate RequestVote for same term: 4
2016/03/17 06:44:28 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:28 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:28 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:29 [INFO] raft: Duplicate RequestVote for same term: 4
2016/03/17 06:44:29 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:29 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:29 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:29 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:29 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:29 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:29 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/17 06:44:29 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/17 06:44:29 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:29 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:29 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:29 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:29 [INFO] raft: Duplicate RequestVote for same term: 5
2016/03/17 06:44:29 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:29 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:30 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:30 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:30 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:30 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/17 06:44:30 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/17 06:44:30 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:30 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:30 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:30 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:30 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:30 [INFO] raft: Duplicate RequestVote for same term: 6
2016/03/17 06:44:30 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:30 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:30 [DEBUG] raft-net: 127.0.0.1:15650 accepted connection from: 127.0.0.1:54147
2016/03/17 06:44:30 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:30 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:30 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/17 06:44:30 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:30 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:30 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:30 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/17 06:44:30 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:30 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:30 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:31 [INFO] raft: Duplicate RequestVote for same term: 7
2016/03/17 06:44:31 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:31 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:31 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:31 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/17 06:44:31 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:31 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:31 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:31 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:31 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:31 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/17 06:44:31 [INFO] raft: Duplicate RequestVote for same term: 8
2016/03/17 06:44:31 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:31 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:31 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:31 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:32 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:32 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/17 06:44:32 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:32 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:32 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:32 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:32 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/17 06:44:32 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:32 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:32 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:32 [INFO] raft: Duplicate RequestVote for same term: 9
2016/03/17 06:44:32 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:32 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:32 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:32 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/17 06:44:32 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:32 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:32 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:32 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:32 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:32 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/17 06:44:33 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:33 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:34 [INFO] raft: Duplicate RequestVote for same term: 10
2016/03/17 06:44:34 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:34 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:34 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:34 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:34 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:44:34 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:34 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:34 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:34 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:34 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:44:34 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:34 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:34 [INFO] raft: Duplicate RequestVote for same term: 11
2016/03/17 06:44:34 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:34 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:35 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:35 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:44:35 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:35 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:35 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:35 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:35 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:44:35 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:35 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:35 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:35 [INFO] raft: Duplicate RequestVote for same term: 12
2016/03/17 06:44:35 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:35 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:35 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:35 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/17 06:44:35 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:35 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:35 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:35 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:35 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/17 06:44:35 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:35 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:36 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:36 [INFO] raft: Duplicate RequestVote for same term: 13
2016/03/17 06:44:36 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:36 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:36 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:36 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:36 [INFO] raft: Duplicate RequestVote for same term: 14
2016/03/17 06:44:36 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:36 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:36 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:36 [INFO] raft: Duplicate RequestVote for same term: 14
2016/03/17 06:44:36 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:36 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:36 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:36 [INFO] raft: Duplicate RequestVote for same term: 14
2016/03/17 06:44:36 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15642 as a peer
2016/03/17 06:44:36 [WARN] raft: Remote peer 127.0.0.1:15650 does not have local node 127.0.0.1:15646 as a peer
2016/03/17 06:44:36 [INFO] consul: shutting down server
2016/03/17 06:44:36 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:36 [DEBUG] memberlist: Failed UDP ping: Node 15649 (timeout reached)
2016/03/17 06:44:36 [INFO] memberlist: Suspect Node 15649 has failed, no acks received
2016/03/17 06:44:36 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:37 [DEBUG] memberlist: Failed UDP ping: Node 15649 (timeout reached)
2016/03/17 06:44:37 [DEBUG] memberlist: Failed UDP ping: Node 15649 (timeout reached)
2016/03/17 06:44:37 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:44:37 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:44:37 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15650: EOF
2016/03/17 06:44:37 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15650: EOF
2016/03/17 06:44:37 [INFO] memberlist: Suspect Node 15649 has failed, no acks received
2016/03/17 06:44:37 [INFO] memberlist: Suspect Node 15649 has failed, no acks received
2016/03/17 06:44:37 [INFO] memberlist: Marking Node 15649 as failed, suspect timeout reached
2016/03/17 06:44:37 [INFO] serf: EventMemberFailed: Node 15649 127.0.0.1
2016/03/17 06:44:37 [INFO] consul: removing LAN server Node 15649 (Addr: 127.0.0.1:15650) (DC: dc1)
2016/03/17 06:44:37 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:37 [INFO] raft: Duplicate RequestVote for same term: 15
2016/03/17 06:44:37 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:37 [DEBUG] memberlist: Failed UDP ping: Node 15649 (timeout reached)
2016/03/17 06:44:37 [DEBUG] memberlist: Failed UDP ping: Node 15649 (timeout reached)
2016/03/17 06:44:37 [INFO] serf: EventMemberFailed: Node 15649 127.0.0.1
2016/03/17 06:44:37 [INFO] consul: removing LAN server Node 15649 (Addr: 127.0.0.1:15650) (DC: dc1)
2016/03/17 06:44:37 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:37 [INFO] raft: Node at 127.0.0.1:15646 [Candidate] entering Candidate state
2016/03/17 06:44:37 [INFO] memberlist: Suspect Node 15649 has failed, no acks received
2016/03/17 06:44:37 [INFO] memberlist: Suspect Node 15649 has failed, no acks received
2016/03/17 06:44:37 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:37 [INFO] raft: Duplicate RequestVote for same term: 15
2016/03/17 06:44:37 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:37 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:37 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:37 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:44:37 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15650: EOF
2016/03/17 06:44:37 [INFO] consul: shutting down server
2016/03/17 06:44:37 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:37 [DEBUG] memberlist: Failed UDP ping: Node 15645 (timeout reached)
2016/03/17 06:44:37 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:44:37 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15650: EOF
2016/03/17 06:44:37 [INFO] memberlist: Suspect Node 15645 has failed, no acks received
2016/03/17 06:44:37 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:37 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/03/17 06:44:37 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15646: EOF
2016/03/17 06:44:37 [DEBUG] memberlist: Failed UDP ping: Node 15645 (timeout reached)
2016/03/17 06:44:37 [INFO] memberlist: Marking Node 15645 as failed, suspect timeout reached
2016/03/17 06:44:37 [INFO] serf: EventMemberFailed: Node 15645 127.0.0.1
2016/03/17 06:44:37 [INFO] consul: removing LAN server Node 15645 (Addr: 127.0.0.1:15646) (DC: dc1)
2016/03/17 06:44:37 [INFO] memberlist: Suspect Node 15645 has failed, no acks received
2016/03/17 06:44:38 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:38 [DEBUG] raft: Votes needed: 2
2016/03/17 06:44:38 [INFO] raft: Duplicate RequestVote for same term: 16
2016/03/17 06:44:38 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:38 [INFO] consul: shutting down server
2016/03/17 06:44:38 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:38 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:38 [WARN] raft: Election timeout reached, restarting election
2016/03/17 06:44:38 [INFO] raft: Node at 127.0.0.1:15642 [Candidate] entering Candidate state
2016/03/17 06:44:38 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15646: dial tcp 127.0.0.1:15646: getsockopt: connection refused
2016/03/17 06:44:38 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15650: dial tcp 127.0.0.1:15650: getsockopt: connection refused
2016/03/17 06:44:38 [DEBUG] raft: Votes needed: 2
--- FAIL: TestServer_Expect (13.84s)
	server_test.go:566: should have 3 peers: []
=== RUN   TestServer_BadExpect
2016/03/17 06:44:39 [INFO] raft: Node at 127.0.0.1:15654 [Follower] entering Follower state
2016/03/17 06:44:39 [INFO] serf: EventMemberJoin: Node 15653 127.0.0.1
2016/03/17 06:44:39 [INFO] consul: adding LAN server Node 15653 (Addr: 127.0.0.1:15654) (DC: dc1)
2016/03/17 06:44:39 [INFO] serf: EventMemberJoin: Node 15653.dc1 127.0.0.1
2016/03/17 06:44:39 [INFO] consul: adding WAN server Node 15653.dc1 (Addr: 127.0.0.1:15654) (DC: dc1)
2016/03/17 06:44:39 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:44:39 [INFO] raft: Node at 127.0.0.1:15658 [Follower] entering Follower state
2016/03/17 06:44:39 [INFO] serf: EventMemberJoin: Node 15657 127.0.0.1
2016/03/17 06:44:39 [INFO] consul: adding LAN server Node 15657 (Addr: 127.0.0.1:15658) (DC: dc1)
2016/03/17 06:44:39 [INFO] serf: EventMemberJoin: Node 15657.dc1 127.0.0.1
2016/03/17 06:44:39 [INFO] consul: adding WAN server Node 15657.dc1 (Addr: 127.0.0.1:15658) (DC: dc1)
2016/03/17 06:44:39 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:44:40 [INFO] raft: Node at 127.0.0.1:15662 [Follower] entering Follower state
2016/03/17 06:44:40 [INFO] serf: EventMemberJoin: Node 15661 127.0.0.1
2016/03/17 06:44:40 [INFO] consul: adding LAN server Node 15661 (Addr: 127.0.0.1:15662) (DC: dc1)
2016/03/17 06:44:40 [INFO] serf: EventMemberJoin: Node 15661.dc1 127.0.0.1
2016/03/17 06:44:40 [INFO] consul: adding WAN server Node 15661.dc1 (Addr: 127.0.0.1:15662) (DC: dc1)
2016/03/17 06:44:40 [DEBUG] memberlist: TCP connection from=127.0.0.1:43900
2016/03/17 06:44:40 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15655
2016/03/17 06:44:40 [INFO] serf: EventMemberJoin: Node 15657 127.0.0.1
2016/03/17 06:44:40 [INFO] consul: adding LAN server Node 15657 (Addr: 127.0.0.1:15658) (DC: dc1)
2016/03/17 06:44:40 [ERR] consul: Member {Node 15657 127.0.0.1 15659 map[role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15658 expect:2] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/03/17 06:44:40 [INFO] serf: EventMemberJoin: Node 15653 127.0.0.1
2016/03/17 06:44:40 [INFO] consul: adding LAN server Node 15653 (Addr: 127.0.0.1:15654) (DC: dc1)
2016/03/17 06:44:40 [ERR] consul: Member {Node 15653 127.0.0.1 15655 map[expect:3 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15654] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/03/17 06:44:40 [DEBUG] serf: messageJoinType: Node 15657
2016/03/17 06:44:40 [DEBUG] serf: messageJoinType: Node 15657
2016/03/17 06:44:40 [DEBUG] serf: messageJoinType: Node 15657
2016/03/17 06:44:40 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15655
2016/03/17 06:44:40 [DEBUG] memberlist: TCP connection from=127.0.0.1:43901
2016/03/17 06:44:40 [INFO] serf: EventMemberJoin: Node 15661 127.0.0.1
2016/03/17 06:44:40 [INFO] consul: adding LAN server Node 15661 (Addr: 127.0.0.1:15662) (DC: dc1)
2016/03/17 06:44:40 [ERR] consul: Member {Node 15657 127.0.0.1 15659 map[vsn:2 vsn_min:1 vsn_max:3 build: port:15658 expect:2 role:consul dc:dc1] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/03/17 06:44:40 [INFO] serf: EventMemberJoin: Node 15657 127.0.0.1
2016/03/17 06:44:40 [INFO] consul: adding LAN server Node 15657 (Addr: 127.0.0.1:15658) (DC: dc1)
2016/03/17 06:44:40 [ERR] consul: Member {Node 15657 127.0.0.1 15659 map[vsn_max:3 build: port:15658 expect:2 role:consul dc:dc1 vsn:2 vsn_min:1] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/03/17 06:44:40 [INFO] serf: EventMemberJoin: Node 15653 127.0.0.1
2016/03/17 06:44:40 [INFO] consul: adding LAN server Node 15653 (Addr: 127.0.0.1:15654) (DC: dc1)
2016/03/17 06:44:40 [ERR] consul: Member {Node 15657 127.0.0.1 15659 map[port:15658 expect:2 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build:] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/03/17 06:44:40 [INFO] serf: EventMemberJoin: Node 15661 127.0.0.1
2016/03/17 06:44:40 [DEBUG] serf: messageJoinType: Node 15657
2016/03/17 06:44:40 [INFO] consul: adding LAN server Node 15661 (Addr: 127.0.0.1:15662) (DC: dc1)
2016/03/17 06:44:40 [ERR] consul: Member {Node 15653 127.0.0.1 15655 map[vsn_max:3 build: port:15654 expect:3 role:consul dc:dc1 vsn:2 vsn_min:1] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/03/17 06:44:40 [DEBUG] serf: messageJoinType: Node 15657
2016/03/17 06:44:40 [DEBUG] serf: messageJoinType: Node 15657
2016/03/17 06:44:40 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/03/17 06:44:40 [DEBUG] serf: messageJoinType: Node 15657
2016/03/17 06:44:40 [INFO] consul: shutting down server
2016/03/17 06:44:40 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:40 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:40 [DEBUG] memberlist: Failed UDP ping: Node 15661 (timeout reached)
2016/03/17 06:44:40 [INFO] consul: shutting down server
2016/03/17 06:44:40 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:40 [INFO] memberlist: Suspect Node 15661 has failed, no acks received
2016/03/17 06:44:40 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:40 [DEBUG] memberlist: Failed UDP ping: Node 15661 (timeout reached)
2016/03/17 06:44:40 [INFO] consul: shutting down server
2016/03/17 06:44:40 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:40 [INFO] memberlist: Suspect Node 15661 has failed, no acks received
2016/03/17 06:44:40 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:40 [INFO] memberlist: Marking Node 15661 as failed, suspect timeout reached
2016/03/17 06:44:40 [INFO] serf: EventMemberFailed: Node 15661 127.0.0.1
--- PASS: TestServer_BadExpect (1.83s)
=== RUN   TestServer_globalRPCErrors
2016/03/17 06:44:40 [INFO] memberlist: Marking Node 15661 as failed, suspect timeout reached
2016/03/17 06:44:40 [INFO] serf: EventMemberFailed: Node 15661 127.0.0.1
2016/03/17 06:44:41 [INFO] raft: Node at 127.0.0.1:15666 [Follower] entering Follower state
2016/03/17 06:44:41 [INFO] serf: EventMemberJoin: Node 15665 127.0.0.1
2016/03/17 06:44:41 [INFO] consul: adding LAN server Node 15665 (Addr: 127.0.0.1:15666) (DC: dc1)
2016/03/17 06:44:41 [INFO] serf: EventMemberJoin: Node 15665.dc1 127.0.0.1
2016/03/17 06:44:41 [INFO] consul: adding WAN server Node 15665.dc1 (Addr: 127.0.0.1:15666) (DC: dc1)
2016/03/17 06:44:41 [ERR] consul.rpc: RPC error: rpc: can't find service Bad.Method from=127.0.0.1:37908
2016/03/17 06:44:41 [INFO] consul: shutting down server
2016/03/17 06:44:41 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:41 [INFO] raft: Node at 127.0.0.1:15666 [Candidate] entering Candidate state
2016/03/17 06:44:41 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:41 [DEBUG] raft: Votes needed: 1
--- PASS: TestServer_globalRPCErrors (1.07s)
=== RUN   TestServer_Encrypted
2016/03/17 06:44:42 [INFO] raft: Node at 127.0.0.1:15670 [Follower] entering Follower state
2016/03/17 06:44:42 [INFO] serf: EventMemberJoin: Node 15669 127.0.0.1
2016/03/17 06:44:42 [INFO] consul: adding LAN server Node 15669 (Addr: 127.0.0.1:15670) (DC: dc1)
2016/03/17 06:44:42 [INFO] serf: EventMemberJoin: Node 15669.dc1 127.0.0.1
2016/03/17 06:44:42 [INFO] consul: adding WAN server Node 15669.dc1 (Addr: 127.0.0.1:15670) (DC: dc1)
2016/03/17 06:44:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:42 [INFO] raft: Node at 127.0.0.1:15670 [Candidate] entering Candidate state
2016/03/17 06:44:42 [INFO] raft: Node at 127.0.0.1:15674 [Follower] entering Follower state
2016/03/17 06:44:42 [INFO] serf: EventMemberJoin: Node 15673 127.0.0.1
2016/03/17 06:44:42 [INFO] consul: adding LAN server Node 15673 (Addr: 127.0.0.1:15674) (DC: dc1)
2016/03/17 06:44:42 [INFO] serf: EventMemberJoin: Node 15673.dc1 127.0.0.1
2016/03/17 06:44:42 [INFO] consul: shutting down server
2016/03/17 06:44:42 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:42 [INFO] consul: adding WAN server Node 15673.dc1 (Addr: 127.0.0.1:15674) (DC: dc1)
2016/03/17 06:44:42 [DEBUG] raft: Votes needed: 1
2016/03/17 06:44:42 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:42 [INFO] raft: Election won. Tally: 1
2016/03/17 06:44:42 [INFO] raft: Node at 127.0.0.1:15670 [Leader] entering Leader state
2016/03/17 06:44:42 [INFO] consul: cluster leadership acquired
2016/03/17 06:44:42 [INFO] consul: New leader elected: Node 15669
2016/03/17 06:44:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:42 [INFO] raft: Node at 127.0.0.1:15674 [Candidate] entering Candidate state
2016/03/17 06:44:43 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:43 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:44:43 [DEBUG] raft: Node 127.0.0.1:15670 updated peer set (2): [127.0.0.1:15670]
2016/03/17 06:44:43 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:44:43 [INFO] consul: member 'Node 15669' joined, marking health alive
2016/03/17 06:44:43 [DEBUG] raft: Votes needed: 1
2016/03/17 06:44:43 [INFO] consul: shutting down server
2016/03/17 06:44:43 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:43 [WARN] serf: Shutdown without a Leave
--- PASS: TestServer_Encrypted (2.02s)
=== RUN   TestSessionEndpoint_Apply
2016/03/17 06:44:44 [INFO] raft: Node at 127.0.0.1:15678 [Follower] entering Follower state
2016/03/17 06:44:44 [INFO] serf: EventMemberJoin: Node 15677 127.0.0.1
2016/03/17 06:44:44 [INFO] consul: adding LAN server Node 15677 (Addr: 127.0.0.1:15678) (DC: dc1)
2016/03/17 06:44:44 [INFO] serf: EventMemberJoin: Node 15677.dc1 127.0.0.1
2016/03/17 06:44:44 [INFO] consul: adding WAN server Node 15677.dc1 (Addr: 127.0.0.1:15678) (DC: dc1)
2016/03/17 06:44:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:44 [INFO] raft: Node at 127.0.0.1:15678 [Candidate] entering Candidate state
2016/03/17 06:44:44 [DEBUG] raft: Votes needed: 1
2016/03/17 06:44:44 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:44 [INFO] raft: Election won. Tally: 1
2016/03/17 06:44:44 [INFO] raft: Node at 127.0.0.1:15678 [Leader] entering Leader state
2016/03/17 06:44:44 [INFO] consul: cluster leadership acquired
2016/03/17 06:44:44 [INFO] consul: New leader elected: Node 15677
2016/03/17 06:44:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:44:45 [DEBUG] raft: Node 127.0.0.1:15678 updated peer set (2): [127.0.0.1:15678]
2016/03/17 06:44:45 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:44:45 [INFO] consul: member 'Node 15677' joined, marking health alive
2016/03/17 06:44:46 [INFO] consul: shutting down server
2016/03/17 06:44:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:46 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:46 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:44:46 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestSessionEndpoint_Apply (2.86s)
=== RUN   TestSessionEndpoint_DeleteApply
2016/03/17 06:44:47 [INFO] raft: Node at 127.0.0.1:15682 [Follower] entering Follower state
2016/03/17 06:44:47 [INFO] serf: EventMemberJoin: Node 15681 127.0.0.1
2016/03/17 06:44:47 [INFO] consul: adding LAN server Node 15681 (Addr: 127.0.0.1:15682) (DC: dc1)
2016/03/17 06:44:47 [INFO] serf: EventMemberJoin: Node 15681.dc1 127.0.0.1
2016/03/17 06:44:47 [INFO] consul: adding WAN server Node 15681.dc1 (Addr: 127.0.0.1:15682) (DC: dc1)
2016/03/17 06:44:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:47 [INFO] raft: Node at 127.0.0.1:15682 [Candidate] entering Candidate state
2016/03/17 06:44:47 [DEBUG] raft: Votes needed: 1
2016/03/17 06:44:47 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:47 [INFO] raft: Election won. Tally: 1
2016/03/17 06:44:47 [INFO] raft: Node at 127.0.0.1:15682 [Leader] entering Leader state
2016/03/17 06:44:47 [INFO] consul: cluster leadership acquired
2016/03/17 06:44:47 [INFO] consul: New leader elected: Node 15681
2016/03/17 06:44:47 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:44:47 [DEBUG] raft: Node 127.0.0.1:15682 updated peer set (2): [127.0.0.1:15682]
2016/03/17 06:44:47 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:44:47 [INFO] consul: member 'Node 15681' joined, marking health alive
2016/03/17 06:44:48 [INFO] consul: shutting down server
2016/03/17 06:44:48 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:48 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:48 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestSessionEndpoint_DeleteApply (2.42s)
=== RUN   TestSessionEndpoint_Get
2016/03/17 06:44:49 [INFO] raft: Node at 127.0.0.1:15686 [Follower] entering Follower state
2016/03/17 06:44:49 [INFO] serf: EventMemberJoin: Node 15685 127.0.0.1
2016/03/17 06:44:49 [INFO] consul: adding LAN server Node 15685 (Addr: 127.0.0.1:15686) (DC: dc1)
2016/03/17 06:44:49 [INFO] serf: EventMemberJoin: Node 15685.dc1 127.0.0.1
2016/03/17 06:44:49 [INFO] consul: adding WAN server Node 15685.dc1 (Addr: 127.0.0.1:15686) (DC: dc1)
2016/03/17 06:44:49 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:49 [INFO] raft: Node at 127.0.0.1:15686 [Candidate] entering Candidate state
2016/03/17 06:44:49 [DEBUG] raft: Votes needed: 1
2016/03/17 06:44:49 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:49 [INFO] raft: Election won. Tally: 1
2016/03/17 06:44:49 [INFO] raft: Node at 127.0.0.1:15686 [Leader] entering Leader state
2016/03/17 06:44:49 [INFO] consul: cluster leadership acquired
2016/03/17 06:44:49 [INFO] consul: New leader elected: Node 15685
2016/03/17 06:44:50 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:44:50 [DEBUG] raft: Node 127.0.0.1:15686 updated peer set (2): [127.0.0.1:15686]
2016/03/17 06:44:50 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:44:50 [INFO] consul: member 'Node 15685' joined, marking health alive
2016/03/17 06:44:50 [INFO] consul: shutting down server
2016/03/17 06:44:50 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:51 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:51 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestSessionEndpoint_Get (2.23s)
=== RUN   TestSessionEndpoint_List
2016/03/17 06:44:51 [INFO] raft: Node at 127.0.0.1:15690 [Follower] entering Follower state
2016/03/17 06:44:51 [INFO] serf: EventMemberJoin: Node 15689 127.0.0.1
2016/03/17 06:44:51 [INFO] consul: adding LAN server Node 15689 (Addr: 127.0.0.1:15690) (DC: dc1)
2016/03/17 06:44:51 [INFO] serf: EventMemberJoin: Node 15689.dc1 127.0.0.1
2016/03/17 06:44:51 [INFO] consul: adding WAN server Node 15689.dc1 (Addr: 127.0.0.1:15690) (DC: dc1)
2016/03/17 06:44:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:51 [INFO] raft: Node at 127.0.0.1:15690 [Candidate] entering Candidate state
2016/03/17 06:44:52 [DEBUG] raft: Votes needed: 1
2016/03/17 06:44:52 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:52 [INFO] raft: Election won. Tally: 1
2016/03/17 06:44:52 [INFO] raft: Node at 127.0.0.1:15690 [Leader] entering Leader state
2016/03/17 06:44:52 [INFO] consul: cluster leadership acquired
2016/03/17 06:44:52 [INFO] consul: New leader elected: Node 15689
2016/03/17 06:44:52 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:44:52 [DEBUG] raft: Node 127.0.0.1:15690 updated peer set (2): [127.0.0.1:15690]
2016/03/17 06:44:52 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:44:52 [INFO] consul: member 'Node 15689' joined, marking health alive
2016/03/17 06:44:54 [INFO] consul: shutting down server
2016/03/17 06:44:54 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:54 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:54 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/03/17 06:44:54 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestSessionEndpoint_List (3.60s)
=== RUN   TestSessionEndpoint_ApplyTimers
2016/03/17 06:44:55 [INFO] raft: Node at 127.0.0.1:15694 [Follower] entering Follower state
2016/03/17 06:44:55 [INFO] serf: EventMemberJoin: Node 15693 127.0.0.1
2016/03/17 06:44:55 [INFO] consul: adding LAN server Node 15693 (Addr: 127.0.0.1:15694) (DC: dc1)
2016/03/17 06:44:55 [INFO] serf: EventMemberJoin: Node 15693.dc1 127.0.0.1
2016/03/17 06:44:55 [INFO] consul: adding WAN server Node 15693.dc1 (Addr: 127.0.0.1:15694) (DC: dc1)
2016/03/17 06:44:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:55 [INFO] raft: Node at 127.0.0.1:15694 [Candidate] entering Candidate state
2016/03/17 06:44:55 [DEBUG] raft: Votes needed: 1
2016/03/17 06:44:55 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:55 [INFO] raft: Election won. Tally: 1
2016/03/17 06:44:55 [INFO] raft: Node at 127.0.0.1:15694 [Leader] entering Leader state
2016/03/17 06:44:55 [INFO] consul: cluster leadership acquired
2016/03/17 06:44:55 [INFO] consul: New leader elected: Node 15693
2016/03/17 06:44:56 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:44:56 [DEBUG] raft: Node 127.0.0.1:15694 updated peer set (2): [127.0.0.1:15694]
2016/03/17 06:44:56 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:44:56 [INFO] consul: member 'Node 15693' joined, marking health alive
2016/03/17 06:44:57 [INFO] consul: shutting down server
2016/03/17 06:44:57 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:57 [WARN] serf: Shutdown without a Leave
2016/03/17 06:44:57 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestSessionEndpoint_ApplyTimers (2.60s)
=== RUN   TestSessionEndpoint_Renew
2016/03/17 06:44:57 [INFO] raft: Node at 127.0.0.1:15698 [Follower] entering Follower state
2016/03/17 06:44:57 [INFO] serf: EventMemberJoin: Node 15697 127.0.0.1
2016/03/17 06:44:57 [INFO] consul: adding LAN server Node 15697 (Addr: 127.0.0.1:15698) (DC: dc1)
2016/03/17 06:44:57 [INFO] serf: EventMemberJoin: Node 15697.dc1 127.0.0.1
2016/03/17 06:44:57 [INFO] consul: adding WAN server Node 15697.dc1 (Addr: 127.0.0.1:15698) (DC: dc1)
2016/03/17 06:44:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:44:57 [INFO] raft: Node at 127.0.0.1:15698 [Candidate] entering Candidate state
2016/03/17 06:44:58 [DEBUG] raft: Votes needed: 1
2016/03/17 06:44:58 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:44:58 [INFO] raft: Election won. Tally: 1
2016/03/17 06:44:58 [INFO] raft: Node at 127.0.0.1:15698 [Leader] entering Leader state
2016/03/17 06:44:58 [INFO] consul: cluster leadership acquired
2016/03/17 06:44:58 [INFO] consul: New leader elected: Node 15697
2016/03/17 06:44:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:44:58 [DEBUG] raft: Node 127.0.0.1:15698 updated peer set (2): [127.0.0.1:15698]
2016/03/17 06:44:58 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:44:58 [INFO] consul: member 'Node 15697' joined, marking health alive
2016/03/17 06:45:20 [DEBUG] consul.state: Session 2cb1dec7-b403-3384-e4ca-0293bb67e75c TTL expired
2016/03/17 06:45:20 [DEBUG] consul.state: Session f9043507-6756-8e06-f42e-c456fcc571e5 TTL expired
2016/03/17 06:45:30 [DEBUG] consul.state: Session c78fb68d-2a6c-c626-b517-8c74fa0cf456 TTL expired
2016/03/17 06:45:30 [DEBUG] consul.state: Session 14044008-cd96-87cf-e79c-4bd6d108183e TTL expired
2016/03/17 06:45:30 [DEBUG] consul.state: Session df8e614c-0fd4-1d56-1bc2-a11f8018b111 TTL expired
2016/03/17 06:45:44 [INFO] consul: shutting down server
2016/03/17 06:45:44 [WARN] serf: Shutdown without a Leave
2016/03/17 06:45:44 [WARN] serf: Shutdown without a Leave
2016/03/17 06:45:44 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestSessionEndpoint_Renew (46.79s)
	session_endpoint_test.go:330: Created session '14044008-cd96-87cf-e79c-4bd6d108183e'
	session_endpoint_test.go:330: Created session '2cb1dec7-b403-3384-e4ca-0293bb67e75c'
	session_endpoint_test.go:330: Created session 'c78fb68d-2a6c-c626-b517-8c74fa0cf456'
	session_endpoint_test.go:330: Created session 'df8e614c-0fd4-1d56-1bc2-a11f8018b111'
	session_endpoint_test.go:330: Created session 'f9043507-6756-8e06-f42e-c456fcc571e5'
	session_endpoint_test.go:362: Renewed session 'c78fb68d-2a6c-c626-b517-8c74fa0cf456'
	session_endpoint_test.go:362: Renewed session '14044008-cd96-87cf-e79c-4bd6d108183e'
	session_endpoint_test.go:362: Renewed session 'df8e614c-0fd4-1d56-1bc2-a11f8018b111'
	session_endpoint_test.go:378: Expect 2 sessions to be destroyed
=== RUN   TestSessionEndpoint_NodeSessions
2016/03/17 06:45:44 [INFO] raft: Node at 127.0.0.1:15702 [Follower] entering Follower state
2016/03/17 06:45:44 [INFO] serf: EventMemberJoin: Node 15701 127.0.0.1
2016/03/17 06:45:44 [INFO] consul: adding LAN server Node 15701 (Addr: 127.0.0.1:15702) (DC: dc1)
2016/03/17 06:45:44 [INFO] serf: EventMemberJoin: Node 15701.dc1 127.0.0.1
2016/03/17 06:45:44 [INFO] consul: adding WAN server Node 15701.dc1 (Addr: 127.0.0.1:15702) (DC: dc1)
2016/03/17 06:45:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/03/17 06:45:44 [INFO] raft: Node at 127.0.0.1:15702 [Candidate] entering Candidate state
2016/03/17 06:45:45 [DEBUG] raft: Votes needed: 1
2016/03/17 06:45:45 [DEBUG] raft: Vote granted. Tally: 1
2016/03/17 06:45:45 [INFO] raft: Election won. Tally: 1
2016/03/17 06:45:45 [INFO] raft: Node at 127.0.0.1:15702 [Leader] entering Leader state
2016/03/17 06:45:45 [INFO] consul: cluster leadership acquired
2016/03/17 06:45:45 [INFO] consul: New leader elected: Node 15701
2016/03/17 06:45:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/03/17 06:45:45 [DEBUG] raft: Node 127.0.0.1:15702 updated peer set (2): [127.0.0.1:15702]
2016/03/17 06:45:45 [DEBUG] consul: reset tombstone GC to index 2
2016/03/17 06:45:45 [INFO] consul: member 'Node 15701' joined, marking health alive
SIGQUIT: quit
PC=0x71d64 m=0

goroutine 0 [idle]:
runtime.futex(0x88fac8, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1e9c8, 0x0, 0x0, 0x0, ...)
	/usr/lib/go/src/runtime/sys_linux_arm.s:246 +0x1c
runtime.futexsleep(0x88fac8, 0x0, 0xffffffff, 0xffffffff)
	/usr/lib/go/src/runtime/os1_linux.go:39 +0x68
runtime.notesleep(0x88fac8)
	/usr/lib/go/src/runtime/lock_futex.go:142 +0xa4
runtime.stopm()
	/usr/lib/go/src/runtime/proc1.go:1136 +0xfc
runtime.findrunnable(0x10a1aa00, 0x0)
	/usr/lib/go/src/runtime/proc1.go:1538 +0x6d0
runtime.schedule()
	/usr/lib/go/src/runtime/proc1.go:1647 +0x274
runtime.park_m(0x10c00ee0)
	/usr/lib/go/src/runtime/proc1.go:1706 +0x16c
runtime.mcall(0x88f520)
	/usr/lib/go/src/runtime/asm_arm.s:178 +0x5c

goroutine 1 [chan receive]:
testing.RunTests(0x6f1dd8, 0x88da68, 0xbd, 0xbd, 0x0)
	/usr/lib/go/src/testing/testing.go:562 +0x618
testing.(*M).Run(0x10b88f7c, 0x12380)
	/usr/lib/go/src/testing/testing.go:494 +0x6c
main.main()
	github.com/hashicorp/consul/consul/_test/_testmain.go:432 +0x118

goroutine 17 [syscall, 9 minutes, locked to thread]:
runtime.goexit()
	/usr/lib/go/src/runtime/asm_arm.s:1036 +0x4

goroutine 19 [syscall, 9 minutes]:
os/signal.loop()
	/usr/lib/go/src/os/signal/signal_unix.go:22 +0x14
created by os/signal.init.1
	/usr/lib/go/src/os/signal/signal_unix.go:28 +0x30

goroutine 8188 [select]:
github.com/hashicorp/consul/consul.(*Coordinate).batchUpdate(0x10f869c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:41 +0x1cc
created by github.com/hashicorp/consul/consul.NewCoordinate
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:33 +0xc4

goroutine 8269 [chan receive]:
github.com/hashicorp/raft.(*deferError).Error(0x10aa0fc0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/future.go:56 +0xc4
github.com/hashicorp/consul/consul.(*Server).leaderLoop(0x10cfb6c0, 0x10ffaec0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:75 +0x224
created by github.com/hashicorp/consul/consul.(*Server).monitorLeadership
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:37 +0xe4

goroutine 8186 [IO wait]:
net.runtime_pollWait(0xb5b61a08, 0x72, 0x10a60000)
	/usr/lib/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0x10f81338, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f81338, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).Read(0x10f81300, 0x10ea0000, 0x1000, 0x1000, 0x0, 0xb5ba1018, 0x10a60000)
	/usr/lib/go/src/net/fd_unix.go:232 +0x1c4
net.(*conn).Read(0x10f48418, 0x10ea0000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/net.go:172 +0xc8
bufio.(*Reader).fill(0x10f7d830)
	/usr/lib/go/src/bufio/bufio.go:97 +0x1c4
bufio.(*Reader).ReadByte(0x10f7d830, 0x10f81300, 0x0, 0x0)
	/usr/lib/go/src/bufio/bufio.go:229 +0x8c
github.com/hashicorp/go-msgpack/codec.(*ioDecReader).readn1(0x10abae60, 0x16)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/go-msgpack/codec/decode.go:90 +0x48
github.com/hashicorp/go-msgpack/codec.(*msgpackDecDriver).initReadNext(0x110bb290)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/go-msgpack/codec/msgpack.go:540 +0x44
github.com/hashicorp/go-msgpack/codec.(*Decoder).decode(0x10f7d890, 0x4ef5e8, 0x10bef420)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/go-msgpack/codec/decode.go:635 +0x54
github.com/hashicorp/go-msgpack/codec.(*Decoder).Decode(0x10f7d890, 0x4ef5e8, 0x10bef420, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/go-msgpack/codec/decode.go:630 +0x74
github.com/hashicorp/net-rpc-msgpackrpc.(*MsgpackCodec).read(0x10f7d7a0, 0x4ef5e8, 0x10bef420, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/net-rpc-msgpackrpc/codec.go:113 +0xbc
github.com/hashicorp/net-rpc-msgpackrpc.(*MsgpackCodec).ReadResponseHeader(0x10f7d7a0, 0x10bef420, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/net-rpc-msgpackrpc/codec.go:66 +0x40
github.com/hashicorp/net-rpc-msgpackrpc.CallWithCodec(0xb5b19410, 0x10f7d7a0, 0x62c8d0, 0xd, 0x5ad2d0, 0x10b08f50, 0x4f1028, 0x10a5d070, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/net-rpc-msgpackrpc/client.go:29 +0x130
github.com/hashicorp/consul/consul.TestSessionEndpoint_NodeSessions(0x110d4720)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/session_endpoint_test.go:454 +0x360
testing.tRunner(0x110d4720, 0x88e224)
	/usr/lib/go/src/testing/testing.go:456 +0xa8
created by testing.RunTests
	/usr/lib/go/src/testing/testing.go:561 +0x5ec

goroutine 8112 [select]:
github.com/hashicorp/raft.(*Raft).runFSM(0x10d30000)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:498 +0xccc
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runFSM)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:242 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x10d30000, 0x10f48028)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 8291 [IO wait]:
net.runtime_pollWait(0xb5b62098, 0x72, 0x10a60000)
	/usr/lib/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0x10f80eb8, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f80eb8, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10f80e80, 0x0, 0xb5ba56f8, 0x10f76350)
	/usr/lib/go/src/net/fd_unix.go:408 +0x21c
net.(*TCPListener).AcceptTCP(0x10f48280, 0xc0001, 0x0, 0x0)
	/usr/lib/go/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x10a647e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:135 +0xcfc

goroutine 8281 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x10a64360, 0x10f80940)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:133 +0x1ec
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 8296 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10a647e0, 0x5f5e100, 0x0, 0x10f812c0, 0x10f81240, 0x10f48408)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 8300 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10a86780, 0x5f3318, 0x5, 0x10aba920)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 8277 [IO wait]:
net.runtime_pollWait(0xb5b61af8, 0x72, 0x10a60000)
	/usr/lib/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0x10f805b8, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f805b8, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10f80580, 0x0, 0xb5ba56f8, 0x10f87c70)
	/usr/lib/go/src/net/fd_unix.go:408 +0x21c
net.(*TCPListener).AcceptTCP(0x10f48088, 0xc0001, 0x0, 0x0)
	/usr/lib/go/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x10a64360)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:135 +0xcfc

goroutine 8297 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x10a86780)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 8290 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x10d1a080)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 8187 [select]:
github.com/hashicorp/consul/consul.(*ConnPool).reap(0x10f66a50)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:412 +0x3b4
created by github.com/hashicorp/consul/consul.NewPool
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:175 +0x1bc

goroutine 8280 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10a64360, 0x5f5e100, 0x0, 0x10f809c0, 0x10f80940, 0x10f48200)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 8283 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x10a86640)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 8286 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10a86640, 0x5f3318, 0x5, 0x10aba340)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 8295 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x10a647e0, 0x10f81240)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:142 +0x1b0
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 8301 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10a86780, 0x5f4230, 0x5, 0x10aba940)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 8285 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10a86640, 0x5f3848, 0x6, 0x10aba320)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 8111 [syscall]:
syscall.Syscall(0x94, 0x5, 0x0, 0x0, 0x0, 0xfffefff, 0x6000)
	/usr/lib/go/src/syscall/asm_linux_arm.s:17 +0x8
syscall.Fdatasync(0x5, 0x0, 0x0)
	/usr/lib/go/src/syscall/zsyscall_linux_arm.go:472 +0x44
github.com/boltdb/bolt.fdatasync(0x10ca4100, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/bolt_linux.go:11 +0x40
github.com/boltdb/bolt.(*Tx).write(0x10a12280, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/tx.go:469 +0x45c
github.com/boltdb/bolt.(*Tx).Commit(0x10a12280, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/tx.go:179 +0x434
github.com/hashicorp/raft-boltdb.(*BoltStore).StoreLogs(0x10f87740, 0x10a26510, 0x1, 0x1, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft-boltdb/bolt_store.go:157 +0x484
github.com/hashicorp/raft.(*LogCache).StoreLogs(0x10f66db0, 0x10a26510, 0x1, 0x1, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/log_cache.go:61 +0x1bc
github.com/hashicorp/raft.(*Raft).dispatchLogs(0x10d30000, 0x10c6dd1c, 0x1, 0x1)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:1039 +0x40c
github.com/hashicorp/raft.(*Raft).leaderLoop(0x10d30000)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:897 +0xa34
github.com/hashicorp/raft.(*Raft).runLeader(0x10d30000)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:787 +0x788
github.com/hashicorp/raft.(*Raft).run(0x10d30000)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:584 +0xb8
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.run)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:241 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x10d30000, 0x10f48020)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 8302 [select]:
github.com/hashicorp/consul/consul.(*Server).wanEventHandler(0x10cfb6c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:67 +0x2c8
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:270 +0xe8c

goroutine 8293 [select]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x10a647e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:137 +0xd34

goroutine 8292 [IO wait]:
net.runtime_pollWait(0xb5b61be8, 0x72, 0x10a60000)
	/usr/lib/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0x10f80ef8, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f80ef8, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x10f80ec0, 0x10ef0000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb5ba1018, 0x10a60000)
	/usr/lib/go/src/net/fd_unix.go:259 +0x20c
net.(*UDPConn).ReadFromUDP(0x10f48288, 0x10ef0000, 0x10000, 0x10000, 0x501578, 0x10000, 0x0, 0x0)
	/usr/lib/go/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x10f48288, 0x10ef0000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x10a647e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:136 +0xd18

goroutine 8299 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10a86780, 0x5f3848, 0x6, 0x10aba900)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 8287 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10a86640, 0x5f4230, 0x5, 0x10aba360)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 8113 [select]:
github.com/hashicorp/raft.(*Raft).runSnapshots(0x10d30000)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:1602 +0x380
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runSnapshots)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:243 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x10d30000, 0x10f48030)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 8279 [select]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x10a64360)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:137 +0xd34

goroutine 8278 [IO wait]:
net.runtime_pollWait(0xb5b61eb8, 0x72, 0x10a60000)
	/usr/lib/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0x10f80678, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f80678, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x10f80640, 0x10b3c000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb5ba1018, 0x10a60000)
	/usr/lib/go/src/net/fd_unix.go:259 +0x20c
net.(*UDPConn).ReadFromUDP(0x10f48090, 0x10b3c000, 0x10000, 0x10000, 0x501578, 0x10000, 0x0, 0x0)
	/usr/lib/go/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x10f48090, 0x10b3c000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x10a64360)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:136 +0xd18

goroutine 8298 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x10a86780)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 8284 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x10a86640)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 8289 [select]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x10aba840)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

goroutine 8282 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10a64360, 0x5f5e100, 0x0, 0x10f80a00, 0x10f80940, 0x10f48218)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 8288 [select]:
github.com/hashicorp/consul/consul.(*Server).lanEventHandler(0x10cfb6c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:37 +0x47c
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:261 +0xd28

goroutine 8276 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x10d1a000)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 8304 [select]:
github.com/hashicorp/consul/consul.(*Server).sessionStats(0x10cfb6c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/session_ttl.go:152 +0x1c4
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:276 +0xec4

goroutine 8294 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10a647e0, 0x5f5e100, 0x0, 0x10f81280, 0x10f81240, 0x10f483f8)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 8274 [select]:
github.com/hashicorp/consul/consul.(*Server).monitorLeadership(0x10cfb6c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:33 +0x1a0
created by github.com/hashicorp/consul/consul.(*Server).setupRaft
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:426 +0x8a4

goroutine 8190 [chan receive]:
github.com/hashicorp/raft.(*deferError).Error(0x10e74300, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/future.go:56 +0xc4
github.com/hashicorp/consul/consul.(*Server).raftApply(0x10cfb6c0, 0x10e90f03, 0x5ad2d0, 0x10f6b180, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:300 +0x25c
github.com/hashicorp/consul/consul.(*Session).Apply(0x10a5c630, 0x10f6b180, 0x10f48d98, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/session_endpoint.go:77 +0x78c
reflect.Value.call(0x5607b0, 0x5b3fe4, 0x13, 0x5f2db8, 0x4, 0x10a39e3c, 0x3, 0x3, 0x0, 0x0, ...)
	/usr/lib/go/src/reflect/value.go:432 +0xeb4
reflect.Value.Call(0x5607b0, 0x5b3fe4, 0x13, 0x10a39e3c, 0x3, 0x3, 0x0, 0x0, 0x0)
	/usr/lib/go/src/reflect/value.go:300 +0x84
net/rpc.(*service).call(0x10d650c0, 0x10f4e800, 0x10ea8ad0, 0x10ce2730, 0x10cc23a0, 0x5ad2d0, 0x10f6b180, 0x16, 0x4f1028, 0x10f48d98, ...)
	/usr/lib/go/src/net/rpc/server.go:383 +0x154
net/rpc.(*Server).ServeRequest(0x10f4e800, 0xb5ba57a0, 0x10f678c0, 0x0, 0x0)
	/usr/lib/go/src/net/rpc/server.go:498 +0x1d8
github.com/hashicorp/consul/consul.(*Server).handleConsulConn(0x10cfb6c0, 0xb5b19320, 0x10a5c998)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:177 +0xfc
github.com/hashicorp/consul/consul.(*Server).handleConn(0x10cfb6c0, 0xb5b19320, 0x10a5c998, 0x10e1b700)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:102 +0x3e4
created by github.com/hashicorp/consul/consul.(*Server).listen
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:68 +0x164

goroutine 8303 [IO wait]:
net.runtime_pollWait(0xb5b62020, 0x72, 0x10a60000)
	/usr/lib/go/src/runtime/netpoll.go:157 +0x60
net.(*pollDesc).Wait(0x10f4e8f8, 0x72, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f4e8f8, 0x0, 0x0)
	/usr/lib/go/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10f4e8c0, 0x0, 0xb5ba56f8, 0x10f87dd0)
	/usr/lib/go/src/net/fd_unix.go:408 +0x21c
net.(*TCPListener).AcceptTCP(0x10a5c650, 0xb73f4, 0x0, 0x0)
	/usr/lib/go/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10a5c650, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go/src/net/tcpsock_posix.go:264 +0x34
github.com/hashicorp/consul/consul.(*Server).listen(0x10cfb6c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:59 +0x48
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:273 +0xea8

goroutine 8189 [select]:
github.com/hashicorp/consul/consul.(*RaftLayer).Accept(0x10d654e0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/raft_rpc.go:57 +0x138
github.com/hashicorp/raft.(*NetworkTransport).listen(0x10ce3a90)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:346 +0x50
created by github.com/hashicorp/raft.NewNetworkTransport
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:138 +0x274

goroutine 8275 [select]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x10aba220)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

trap    0x6
error   0x0
oldmask 0x0
r0      0x88fac8
r1      0x0
r2      0x0
r3      0x0
r4      0x0
r5      0x0
r6      0x1338a0
r7      0xf0
r8      0x111e6f
r9      0x0
r10     0x88f500
fp      0x88f314
ip      0x10a14541
sp      0xbed033dc
lr      0x3b658
pc      0x71d64
cpsr    0xa0000010
fault   0x0
*** Test killed with quit: ran too long (10m0s).
FAIL	github.com/hashicorp/consul/consul	600.117s
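
The failure above is go test's whole-binary timeout (10m0s is the default -timeout) firing while server RPCs sit blocked, as the "select" and "chan receive" goroutines in the dump suggest. Purely as an illustration, not part of the consul test suite, a per-step deadline guard of the following shape would surface such a hang as an ordinary test failure instead of a quit-killed binary (helper name and duration are made up):

package consul_test

import (
	"testing"
	"time"
)

// withDeadline runs fn in a separate goroutine and fails the calling test
// if fn has not returned within d. If the deadline fires, the goroutine
// running fn is leaked; acceptable for a sketch whose only point is to
// fail one test quickly rather than hit the binary-wide 10m timeout.
func withDeadline(t *testing.T, d time.Duration, fn func()) {
	done := make(chan struct{})
	go func() {
		defer close(done)
		fn()
	}()
	select {
	case <-done:
	case <-time.After(d):
		t.Fatalf("step still blocked after %s", d)
	}
}
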
=== RUN   TestDelay
--- PASS: TestDelay (0.50s)
=== RUN   TestGraveyard_Lifecycle
--- PASS: TestGraveyard_Lifecycle (0.00s)
=== RUN   TestGraveyard_GC_Trigger
--- PASS: TestGraveyard_GC_Trigger (0.12s)
=== RUN   TestGraveyard_Snapshot_Restore
--- PASS: TestGraveyard_Snapshot_Restore (0.00s)
=== RUN   TestNotifyGroup
--- PASS: TestNotifyGroup (0.00s)
=== RUN   TestNotifyGroup_Clear
--- PASS: TestNotifyGroup_Clear (0.00s)
=== RUN   TestStateStore_PreparedQuery_isUUID
--- PASS: TestStateStore_PreparedQuery_isUUID (0.01s)
=== RUN   TestStateStore_PreparedQuerySet_PreparedQueryGet
--- PASS: TestStateStore_PreparedQuerySet_PreparedQueryGet (0.01s)
=== RUN   TestStateStore_PreparedQueryDelete
--- PASS: TestStateStore_PreparedQueryDelete (0.01s)
=== RUN   TestStateStore_PreparedQueryLookup
--- PASS: TestStateStore_PreparedQueryLookup (0.00s)
=== RUN   TestStateStore_PreparedQueryList
--- PASS: TestStateStore_PreparedQueryList (0.02s)
=== RUN   TestStateStore_PreparedQuery_Snapshot_Restore
--- PASS: TestStateStore_PreparedQuery_Snapshot_Restore (0.02s)
=== RUN   TestStateStore_PreparedQuery_Watches
--- PASS: TestStateStore_PreparedQuery_Watches (0.01s)
=== RUN   TestStateStore_Schema
--- PASS: TestStateStore_Schema (0.00s)
=== RUN   TestStateStore_Restore_Abort
--- PASS: TestStateStore_Restore_Abort (0.00s)
=== RUN   TestStateStore_maxIndex
--- PASS: TestStateStore_maxIndex (0.00s)
=== RUN   TestStateStore_indexUpdateMaxTxn
--- PASS: TestStateStore_indexUpdateMaxTxn (0.00s)
=== RUN   TestStateStore_GC
--- PASS: TestStateStore_GC (0.07s)
=== RUN   TestStateStore_ReapTombstones
--- PASS: TestStateStore_ReapTombstones (0.00s)
=== RUN   TestStateStore_GetWatches
--- PASS: TestStateStore_GetWatches (0.00s)
=== RUN   TestStateStore_EnsureRegistration
--- PASS: TestStateStore_EnsureRegistration (0.01s)
=== RUN   TestStateStore_EnsureRegistration_Restore
--- PASS: TestStateStore_EnsureRegistration_Restore (0.01s)
=== RUN   TestStateStore_EnsureRegistration_Watches
--- PASS: TestStateStore_EnsureRegistration_Watches (0.01s)
=== RUN   TestStateStore_EnsureNode
--- PASS: TestStateStore_EnsureNode (0.00s)
=== RUN   TestStateStore_GetNodes
--- PASS: TestStateStore_GetNodes (0.00s)
=== RUN   TestStateStore_DeleteNode
--- PASS: TestStateStore_DeleteNode (0.01s)
=== RUN   TestStateStore_Node_Snapshot
--- PASS: TestStateStore_Node_Snapshot (0.01s)
=== RUN   TestStateStore_Node_Watches
--- PASS: TestStateStore_Node_Watches (0.01s)
=== RUN   TestStateStore_EnsureService
--- PASS: TestStateStore_EnsureService (0.01s)
=== RUN   TestStateStore_Services
--- PASS: TestStateStore_Services (0.00s)
=== RUN   TestStateStore_ServiceNodes
--- PASS: TestStateStore_ServiceNodes (0.01s)
=== RUN   TestStateStore_ServiceTagNodes
--- PASS: TestStateStore_ServiceTagNodes (0.00s)
=== RUN   TestStateStore_ServiceTagNodes_MultipleTags
--- PASS: TestStateStore_ServiceTagNodes_MultipleTags (0.01s)
=== RUN   TestStateStore_DeleteService
--- PASS: TestStateStore_DeleteService (0.01s)
=== RUN   TestStateStore_Service_Snapshot
--- PASS: TestStateStore_Service_Snapshot (0.01s)
=== RUN   TestStateStore_Service_Watches
--- PASS: TestStateStore_Service_Watches (0.01s)
=== RUN   TestStateStore_EnsureCheck
--- PASS: TestStateStore_EnsureCheck (0.01s)
=== RUN   TestStateStore_EnsureCheck_defaultStatus
--- PASS: TestStateStore_EnsureCheck_defaultStatus (0.00s)
=== RUN   TestStateStore_NodeChecks
--- PASS: TestStateStore_NodeChecks (0.01s)
=== RUN   TestStateStore_ServiceChecks
--- PASS: TestStateStore_ServiceChecks (0.01s)
=== RUN   TestStateStore_ChecksInState
--- PASS: TestStateStore_ChecksInState (0.00s)
=== RUN   TestStateStore_DeleteCheck
--- PASS: TestStateStore_DeleteCheck (0.00s)
=== RUN   TestStateStore_CheckServiceNodes
--- PASS: TestStateStore_CheckServiceNodes (0.03s)
=== RUN   TestStateStore_CheckServiceTagNodes
--- PASS: TestStateStore_CheckServiceTagNodes (0.00s)
=== RUN   TestStateStore_Check_Snapshot
--- PASS: TestStateStore_Check_Snapshot (0.01s)
=== RUN   TestStateStore_Check_Watches
--- PASS: TestStateStore_Check_Watches (0.00s)
=== RUN   TestStateStore_NodeInfo_NodeDump
--- PASS: TestStateStore_NodeInfo_NodeDump (0.02s)
=== RUN   TestStateStore_KVSSet_KVSGet
--- PASS: TestStateStore_KVSSet_KVSGet (0.00s)
=== RUN   TestStateStore_KVSList
--- PASS: TestStateStore_KVSList (0.01s)
=== RUN   TestStateStore_KVSListKeys
--- PASS: TestStateStore_KVSListKeys (0.01s)
=== RUN   TestStateStore_KVSDelete
--- PASS: TestStateStore_KVSDelete (0.00s)
=== RUN   TestStateStore_KVSDeleteCAS
--- PASS: TestStateStore_KVSDeleteCAS (0.00s)
=== RUN   TestStateStore_KVSSetCAS
--- PASS: TestStateStore_KVSSetCAS (0.00s)
=== RUN   TestStateStore_KVSDeleteTree
--- PASS: TestStateStore_KVSDeleteTree (0.01s)
=== RUN   TestStateStore_KVSLockDelay
--- PASS: TestStateStore_KVSLockDelay (0.00s)
=== RUN   TestStateStore_KVSLock
--- PASS: TestStateStore_KVSLock (0.01s)
=== RUN   TestStateStore_KVSUnlock
--- PASS: TestStateStore_KVSUnlock (0.00s)
=== RUN   TestStateStore_KVS_Snapshot_Restore
--- PASS: TestStateStore_KVS_Snapshot_Restore (0.01s)
=== RUN   TestStateStore_KVS_Watches
--- PASS: TestStateStore_KVS_Watches (0.01s)
=== RUN   TestStateStore_Tombstone_Snapshot_Restore
--- PASS: TestStateStore_Tombstone_Snapshot_Restore (0.01s)
=== RUN   TestStateStore_SessionCreate_SessionGet
--- PASS: TestStateStore_SessionCreate_SessionGet (0.01s)
=== RUN   TestStateStore_NodeSessions
--- PASS: TestStateStore_NodeSessions (0.01s)
=== RUN   TestStateStore_SessionDestroy
--- PASS: TestStateStore_SessionDestroy (0.00s)
=== RUN   TestStateStore_Session_Snapshot_Restore
--- PASS: TestStateStore_Session_Snapshot_Restore (0.01s)
=== RUN   TestStateStore_Session_Watches
--- PASS: TestStateStore_Session_Watches (0.00s)
=== RUN   TestStateStore_Session_Invalidate_DeleteNode
--- PASS: TestStateStore_Session_Invalidate_DeleteNode (0.00s)
=== RUN   TestStateStore_Session_Invalidate_DeleteService
--- PASS: TestStateStore_Session_Invalidate_DeleteService (0.01s)
=== RUN   TestStateStore_Session_Invalidate_Critical_Check
--- PASS: TestStateStore_Session_Invalidate_Critical_Check (0.00s)
=== RUN   TestStateStore_Session_Invalidate_DeleteCheck
--- PASS: TestStateStore_Session_Invalidate_DeleteCheck (0.01s)
=== RUN   TestStateStore_Session_Invalidate_Key_Unlock_Behavior
--- PASS: TestStateStore_Session_Invalidate_Key_Unlock_Behavior (0.00s)
=== RUN   TestStateStore_Session_Invalidate_Key_Delete_Behavior
--- PASS: TestStateStore_Session_Invalidate_Key_Delete_Behavior (0.00s)
=== RUN   TestStateStore_Session_Invalidate_PreparedQuery_Delete
--- PASS: TestStateStore_Session_Invalidate_PreparedQuery_Delete (0.01s)
=== RUN   TestStateStore_ACLSet_ACLGet
--- PASS: TestStateStore_ACLSet_ACLGet (0.00s)
=== RUN   TestStateStore_ACLList
--- PASS: TestStateStore_ACLList (0.00s)
=== RUN   TestStateStore_ACLDelete
--- PASS: TestStateStore_ACLDelete (0.00s)
=== RUN   TestStateStore_ACL_Snapshot_Restore
--- PASS: TestStateStore_ACL_Snapshot_Restore (0.00s)
=== RUN   TestStateStore_ACL_Watches
--- PASS: TestStateStore_ACL_Watches (0.00s)
=== RUN   TestStateStore_Coordinate_Updates
--- PASS: TestStateStore_Coordinate_Updates (0.00s)
=== RUN   TestStateStore_Coordinate_Cleanup
--- PASS: TestStateStore_Coordinate_Cleanup (0.00s)
=== RUN   TestStateStore_Coordinate_Snapshot_Restore
--- PASS: TestStateStore_Coordinate_Snapshot_Restore (0.00s)
=== RUN   TestStateStore_Coordinate_Watches
--- PASS: TestStateStore_Coordinate_Watches (0.00s)
=== RUN   TestTombstoneGC_invalid
--- PASS: TestTombstoneGC_invalid (0.00s)
=== RUN   TestTombstoneGC
--- PASS: TestTombstoneGC (0.03s)
=== RUN   TestTombstoneGC_Expire
--- PASS: TestTombstoneGC_Expire (0.02s)
=== RUN   TestWatch_FullTableWatch
--- PASS: TestWatch_FullTableWatch (0.00s)
=== RUN   TestWatch_DumbWatchManager
--- PASS: TestWatch_DumbWatchManager (0.00s)
=== RUN   TestWatch_PrefixWatch
--- PASS: TestWatch_PrefixWatch (0.00s)
=== RUN   TestWatch_MultiWatch
--- PASS: TestWatch_MultiWatch (0.00s)
PASS
ok  	github.com/hashicorp/consul/consul/state	1.340s
=== RUN   TestEncodeDecode
--- PASS: TestEncodeDecode (0.00s)
=== RUN   TestStructs_Implements
--- PASS: TestStructs_Implements (0.00s)
=== RUN   TestStructs_ServiceNode_Clone
--- PASS: TestStructs_ServiceNode_Clone (0.00s)
=== RUN   TestStructs_ServiceNode_Conversions
--- PASS: TestStructs_ServiceNode_Conversions (0.00s)
=== RUN   TestStructs_NodeService_IsSame
--- PASS: TestStructs_NodeService_IsSame (0.00s)
=== RUN   TestStructs_HealthCheck_IsSame
--- PASS: TestStructs_HealthCheck_IsSame (0.00s)
=== RUN   TestStructs_CheckServiceNodes_Shuffle
--- PASS: TestStructs_CheckServiceNodes_Shuffle (0.04s)
=== RUN   TestStructs_CheckServiceNodes_Filter
--- PASS: TestStructs_CheckServiceNodes_Filter (0.00s)
=== RUN   TestStructs_DirEntry_Clone
--- PASS: TestStructs_DirEntry_Clone (0.00s)
PASS
ok  	github.com/hashicorp/consul/consul/structs	0.138s
testing: warning: no tests to run
PASS
ok  	github.com/hashicorp/consul/testutil	0.053s
=== RUN   TestConfig_AppendCA_None
--- PASS: TestConfig_AppendCA_None (0.00s)
=== RUN   TestConfig_CACertificate_Valid
--- PASS: TestConfig_CACertificate_Valid (0.00s)
=== RUN   TestConfig_KeyPair_None
--- PASS: TestConfig_KeyPair_None (0.00s)
=== RUN   TestConfig_KeyPair_Valid
--- PASS: TestConfig_KeyPair_Valid (0.01s)
=== RUN   TestConfig_OutgoingTLS_MissingCA
--- PASS: TestConfig_OutgoingTLS_MissingCA (0.00s)
=== RUN   TestConfig_OutgoingTLS_OnlyCA
--- PASS: TestConfig_OutgoingTLS_OnlyCA (0.00s)
=== RUN   TestConfig_OutgoingTLS_VerifyOutgoing
--- PASS: TestConfig_OutgoingTLS_VerifyOutgoing (0.00s)
=== RUN   TestConfig_OutgoingTLS_ServerName
--- PASS: TestConfig_OutgoingTLS_ServerName (0.00s)
=== RUN   TestConfig_OutgoingTLS_VerifyHostname
--- PASS: TestConfig_OutgoingTLS_VerifyHostname (0.00s)
=== RUN   TestConfig_OutgoingTLS_WithKeyPair
--- PASS: TestConfig_OutgoingTLS_WithKeyPair (0.01s)
=== RUN   TestConfig_outgoingWrapper_OK
--- PASS: TestConfig_outgoingWrapper_OK (0.45s)
=== RUN   TestConfig_outgoingWrapper_BadDC
--- PASS: TestConfig_outgoingWrapper_BadDC (0.31s)
=== RUN   TestConfig_outgoingWrapper_BadCert
--- PASS: TestConfig_outgoingWrapper_BadCert (0.07s)
=== RUN   TestConfig_wrapTLS_OK
--- PASS: TestConfig_wrapTLS_OK (0.18s)
=== RUN   TestConfig_wrapTLS_BadCert
--- PASS: TestConfig_wrapTLS_BadCert (0.39s)
=== RUN   TestConfig_IncomingTLS
--- PASS: TestConfig_IncomingTLS (0.01s)
=== RUN   TestConfig_IncomingTLS_MissingCA
--- PASS: TestConfig_IncomingTLS_MissingCA (0.01s)
=== RUN   TestConfig_IncomingTLS_MissingKey
--- PASS: TestConfig_IncomingTLS_MissingKey (0.01s)
=== RUN   TestConfig_IncomingTLS_NoVerify
--- PASS: TestConfig_IncomingTLS_NoVerify (0.00s)
PASS
ok  	github.com/hashicorp/consul/tlsutil	1.504s
=== RUN   TestKeyWatch
--- SKIP: TestKeyWatch (0.00s)
	funcs_test.go:19: 
=== RUN   TestKeyPrefixWatch
--- SKIP: TestKeyPrefixWatch (0.00s)
	funcs_test.go:73: 
=== RUN   TestServicesWatch
--- SKIP: TestServicesWatch (0.00s)
	funcs_test.go:129: 
=== RUN   TestNodesWatch
--- SKIP: TestNodesWatch (0.00s)
	funcs_test.go:172: 
=== RUN   TestServiceWatch
--- SKIP: TestServiceWatch (0.00s)
	funcs_test.go:221: 
=== RUN   TestChecksWatch_State
--- SKIP: TestChecksWatch_State (0.00s)
	funcs_test.go:270: 
=== RUN   TestChecksWatch_Service
--- SKIP: TestChecksWatch_Service (0.00s)
	funcs_test.go:330: 
=== RUN   TestEventWatch
--- SKIP: TestEventWatch (0.00s)
	funcs_test.go:398: 
=== RUN   TestRun_Stop
--- PASS: TestRun_Stop (0.01s)
=== RUN   TestParseBasic
--- PASS: TestParseBasic (0.00s)
=== RUN   TestParse_exempt
--- PASS: TestParse_exempt (0.00s)
PASS
ok  	github.com/hashicorp/consul/watch	0.049s
dh_auto_test: go test -v github.com/hashicorp/consul github.com/hashicorp/consul/acl github.com/hashicorp/consul/api github.com/hashicorp/consul/command github.com/hashicorp/consul/command/agent github.com/hashicorp/consul/consul github.com/hashicorp/consul/consul/state github.com/hashicorp/consul/consul/structs github.com/hashicorp/consul/testutil github.com/hashicorp/consul/tlsutil github.com/hashicorp/consul/watch returned exit code 2
debian/rules:7: recipe for target 'override_dh_auto_test' failed
make[1]: [override_dh_auto_test] Error 2 (ignored)
make[1]: Leaving directory '/<<PKGBUILDDIR>>'
 fakeroot debian/rules binary-arch
dh binary-arch --buildsystem=golang --with=golang
   dh_testroot -a -O--buildsystem=golang
   dh_prep -a -O--buildsystem=golang
   dh_auto_install -a -O--buildsystem=golang
	mkdir -p /<<BUILDDIR>>/consul-0.6.3\~dfsg/debian/tmp/usr
	cp -r bin /<<BUILDDIR>>/consul-0.6.3\~dfsg/debian/tmp/usr
	mkdir -p /<<BUILDDIR>>/consul-0.6.3\~dfsg/debian/tmp/usr/share/gocode/src/github.com/hashicorp/consul
	cp -r -T src/github.com/hashicorp/consul /<<BUILDDIR>>/consul-0.6.3\~dfsg/debian/tmp/usr/share/gocode/src/github.com/hashicorp/consul
   dh_install -a -O--buildsystem=golang
   dh_installdocs -a -O--buildsystem=golang
   dh_installchangelogs -a -O--buildsystem=golang
   dh_perl -a -O--buildsystem=golang
   dh_link -a -O--buildsystem=golang
   dh_strip_nondeterminism -a -O--buildsystem=golang
   dh_compress -a -O--buildsystem=golang
   dh_fixperms -a -O--buildsystem=golang
   dh_strip -a -O--buildsystem=golang
   dh_makeshlibs -a -O--buildsystem=golang
   dh_shlibdeps -a -O--buildsystem=golang
   dh_installdeb -a -O--buildsystem=golang
   dh_golang -a -O--buildsystem=golang
   dh_gencontrol -a -O--buildsystem=golang
dpkg-gencontrol: warning: File::FcntlLock not available; using flock which is not NFS-safe
   dh_md5sums -a -O--buildsystem=golang
   dh_builddeb -u-Zxz -a -O--buildsystem=golang
dpkg-deb: building package 'consul' in '../consul_0.6.3~dfsg-1_armhf.deb'.
 dpkg-genchanges -B -mRaspbian wandboard test autobuilder <root@raspbian.org> >../consul_0.6.3~dfsg-1_armhf.changes
dpkg-genchanges: binary-only arch-specific upload (source code and arch-indep packages not included)
 dpkg-source --after-build consul-0.6.3~dfsg
dpkg-buildpackage: binary-only upload (no source included)
--------------------------------------------------------------------------------
Build finished at 20160317-0646

Finished
--------

I: Built successfully

+------------------------------------------------------------------------------+
| Post Build Chroot                                                            |
+------------------------------------------------------------------------------+


+------------------------------------------------------------------------------+
| Changes                                                                      |
+------------------------------------------------------------------------------+


consul_0.6.3~dfsg-1_armhf.changes:
----------------------------------

Format: 1.8
Date: Fri, 06 Nov 2015 12:20:36 -0800
Source: consul
Binary: golang-github-hashicorp-consul-dev consul
Architecture: armhf
Version: 0.6.3~dfsg-1
Distribution: stretch-staging
Urgency: medium
Maintainer: Raspbian wandboard test autobuilder <root@raspbian.org>
Changed-By: Tianon Gravi <tianon@debian.org>
Description:
 consul     - tool for service discovery, monitoring and configuration
 golang-github-hashicorp-consul-dev - tool for service discovery, monitoring and configuration (source)
Closes: 804277
Changes:
 consul (0.6.3~dfsg-1) unstable; urgency=medium
 .
   * Initial release (Closes: #804277).
Checksums-Sha1:
 7339a94883849a5e7fed7efc88052fe5b9fa9525 2665890 consul_0.6.3~dfsg-1_armhf.deb
Checksums-Sha256:
 c6ca29240ee670d731898bccffd9567c442861350758a6e1762b6c47d9d2951a 2665890 consul_0.6.3~dfsg-1_armhf.deb
Files:
 dce8ba964ae0cdf5a9fd9e7b7fe39e53 2665890 devel extra consul_0.6.3~dfsg-1_armhf.deb

+------------------------------------------------------------------------------+
| Package contents                                                             |
+------------------------------------------------------------------------------+


consul_0.6.3~dfsg-1_armhf.deb
-----------------------------

 new debian package, version 2.0.
 size 2665890 bytes: control archive=1605 bytes.
    3076 bytes,    34 lines      control              
     257 bytes,     4 lines      md5sums              
 Package: consul
 Version: 0.6.3~dfsg-1
 Architecture: armhf
 Maintainer: Debian Go Packaging Team <pkg-go-maintainers@lists.alioth.debian.org>
 Installed-Size: 10892
 Depends: libc6 (>= 2.4)
 Built-Using: consul-migrate (= 0.1.0-1), golang (= 2:1.5.3-1+rpi1), golang-dns (= 0.0~git20151030.0.6a15566-1), golang-github-armon-circbuf (= 0.0~git20150827.0.bbbad09-1), golang-github-armon-go-metrics (= 0.0~git20151207.0.06b6099-1), golang-github-armon-go-radix (= 0.0~git20150602.0.fbd82e8-1), golang-github-armon-gomdb (= 0.0~git20150106.0.151f2e0-1), golang-github-elazarl-go-bindata-assetfs (= 0.0~git20151224.0.57eb5e1-1), golang-github-fsouza-go-dockerclient (= 0.0+git20150905-1), golang-github-hashicorp-go-checkpoint (= 0.0~git20151022.0.e4b2dc3-1), golang-github-hashicorp-go-memdb (= 0.0~git20160301.0.98f52f5-1), golang-github-hashicorp-go-msgpack (= 0.0~git20150518-1), golang-github-hashicorp-go-reap (= 0.0~git20160113.0.2d85522-1), golang-github-hashicorp-go-syslog (= 0.0~git20150218.0.42a2b57-1), golang-github-hashicorp-golang-lru (= 0.0~git20160207.0.a0d98a5-1), golang-github-hashicorp-hcl (= 0.0~git20151110.0.fa160f1-1), golang-github-hashicorp-logutils (= 0.0~git20150609.0.0dc08b1-1), golang-github-hashicorp-memberlist (= 0.0~git20160225.0.ae9a8d9-1), golang-github-hashicorp-raft (= 0.0~git20150728.9b586e2-2), golang-github-hashicorp-raft-boltdb (= 0.0~git20150201.d1e82c1-1), golang-github-hashicorp-scada-client (= 0.0~git20150828.0.84989fd-1), golang-github-hashicorp-serf (= 0.7.0~ds1-1), golang-github-hashicorp-yamux (= 0.0~git20151129.0.df94978-1), golang-github-inconshreveable-muxado (= 0.0~git20140312.0.f693c7e-1), golang-github-mitchellh-cli (= 0.0~git20150618.0.8102d0e-1), golang-github-mitchellh-mapstructure (= 0.0~git20150717.0.281073e-2), golang-github-ryanuber-columnize (= 2.0.1-1)
 Section: devel
 Priority: extra
 Homepage: https://github.com/hashicorp/consul
 Description: tool for service discovery, monitoring and configuration
  Consul is a tool for service discovery and configuration. Consul is
  distributed, highly available, and extremely scalable.
  .
  Consul provides several key features:
  .
   - Service Discovery - Consul makes it simple for services to register
     themselves and to discover other services via a DNS or HTTP interface.
     External services such as SaaS providers can be registered as well.
  .
   - Health Checking - Health Checking enables Consul to quickly alert operators
     about any issues in a cluster. The integration with service discovery
     prevents routing traffic to unhealthy hosts and enables service level
     circuit breakers.
  .
   - Key/Value Storage - A flexible key/value store enables storing dynamic
     configuration, feature flagging, coordination, leader election and more. The
     simple HTTP API makes it easy to use anywhere.
  .
   - Multi-Datacenter - Consul is built to be datacenter aware, and can support
     any number of regions without complex configuration.
  .
  Consul runs on Linux, Mac OS X, and Windows. It is recommended to run the
  Consul servers only on Linux, however.

drwxr-xr-x root/root         0 2016-03-17 06:45 ./
drwxr-xr-x root/root         0 2016-03-17 06:45 ./usr/
drwxr-xr-x root/root         0 2016-03-17 06:45 ./usr/bin/
-rwxr-xr-x root/root  11117564 2016-03-17 06:45 ./usr/bin/consul
drwxr-xr-x root/root         0 2016-03-17 06:45 ./usr/share/
drwxr-xr-x root/root         0 2016-03-17 06:45 ./usr/share/doc/
drwxr-xr-x root/root         0 2016-03-17 06:45 ./usr/share/doc/consul/
-rw-r--r-- root/root       162 2016-03-08 23:37 ./usr/share/doc/consul/changelog.Debian.gz
-rw-r--r-- root/root      9218 2016-01-14 18:28 ./usr/share/doc/consul/changelog.gz
-rw-r--r-- root/root     16831 2016-03-07 08:52 ./usr/share/doc/consul/copyright


+------------------------------------------------------------------------------+
| Post Build                                                                   |
+------------------------------------------------------------------------------+


+------------------------------------------------------------------------------+
| Cleanup                                                                      |
+------------------------------------------------------------------------------+

Purging /<<BUILDDIR>>
Not cleaning session: cloned chroot in use

+------------------------------------------------------------------------------+
| Summary                                                                      |
+------------------------------------------------------------------------------+

Build Architecture: armhf
Build-Space: 81984
Build-Time: 895
Distribution: stretch-staging
Host Architecture: armhf
Install-Time: 547
Job: consul_0.6.3~dfsg-1
Machine Architecture: armhf
Package: consul
Package-Time: 1484
Source-Version: 0.6.3~dfsg-1
Space: 81984
Status: successful
Version: 0.6.3~dfsg-1
--------------------------------------------------------------------------------
Finished at 20160317-0646
Build needed 00:24:44, 81984k disc space