Raspbian Package Auto-Building

Build log for consul (0.6.4~dfsg-3) on armhf

consul 0.6.4~dfsg-3 armhf → 2016-06-23 07:55:58

sbuild (Debian sbuild) 0.66.0 (04 Oct 2015) on testwandboard

+==============================================================================+
| consul 0.6.4~dfsg-3 (armhf)                                23 Jun 2016 07:26 |
+==============================================================================+

Package: consul
Version: 0.6.4~dfsg-3
Source Version: 0.6.4~dfsg-3
Distribution: stretch-staging
Machine Architecture: armhf
Host Architecture: armhf
Build Architecture: armhf

I: NOTICE: Log filtering will replace 'build/consul-4Zs6FL/consul-0.6.4~dfsg' with '<<PKGBUILDDIR>>'
I: NOTICE: Log filtering will replace 'build/consul-4Zs6FL' with '<<BUILDDIR>>'
I: NOTICE: Log filtering will replace 'var/lib/schroot/mount/stretch-staging-armhf-sbuild-c2677bfe-326a-403f-9d38-57e9361bb9bb' with '<<CHROOT>>'
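The notices above describe sbuild's log filtering: each literal build path is replaced by a stable placeholder so logs are comparable across builds. A minimal sketch of the same substitution, using the exact paths announced above (the sample input line is hypothetical, not taken from this log; note the longer path must be substituted first so `<<PKGBUILDDIR>>` wins over `<<BUILDDIR>>`):

```shell
# Reproduce sbuild's announced log filtering with sed.
# Order matters: the more specific (longer) path is replaced first.
filter_log() {
  sed -e 's|build/consul-4Zs6FL/consul-0.6.4~dfsg|<<PKGBUILDDIR>>|g' \
      -e 's|build/consul-4Zs6FL|<<BUILDDIR>>|g'
}

# Hypothetical sample line, for illustration only:
printf 'unpacking in /build/consul-4Zs6FL/consul-0.6.4~dfsg\n' | filter_log
```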

+------------------------------------------------------------------------------+
| Update chroot                                                                |
+------------------------------------------------------------------------------+

Get:1 http://172.17.0.1/private stretch-staging InRelease [11.3 kB]
Get:2 http://172.17.0.1/private stretch-staging/main Sources [9077 kB]
Get:3 http://172.17.0.1/private stretch-staging/main armhf Packages [11.1 MB]
Fetched 20.1 MB in 1min 16s (263 kB/s)
Reading package lists...
W: No sandbox user '_apt' on the system, can not drop privileges

+------------------------------------------------------------------------------+
| Fetch source files                                                           |
+------------------------------------------------------------------------------+


Check APT
---------

Checking available source versions...

Download source files with APT
------------------------------

Reading package lists...
NOTICE: 'consul' packaging is maintained in the 'Git' version control system at:
https://anonscm.debian.org/git/pkg-go/packages/golang-github-hashicorp-consul.git
Please use:
git clone https://anonscm.debian.org/git/pkg-go/packages/golang-github-hashicorp-consul.git
to retrieve the latest (possibly unreleased) updates to the package.
Need to get 601 kB of source archives.
Get:1 http://172.17.0.1/private stretch-staging/main consul 0.6.4~dfsg-3 (dsc) [3321 B]
Get:2 http://172.17.0.1/private stretch-staging/main consul 0.6.4~dfsg-3 (tar) [586 kB]
Get:3 http://172.17.0.1/private stretch-staging/main consul 0.6.4~dfsg-3 (diff) [11.1 kB]
Fetched 601 kB in 1s (305 kB/s)
Download complete and in download only mode

Check architectures
-------------------


Check dependencies
------------------

Merged Build-Depends: build-essential, fakeroot
Filtered Build-Depends: build-essential, fakeroot
dpkg-deb: building package 'sbuild-build-depends-core-dummy' in '/<<BUILDDIR>>/resolver-QxXOFB/apt_archive/sbuild-build-depends-core-dummy.deb'.
OK
Get:1 file:/<<BUILDDIR>>/resolver-QxXOFB/apt_archive ./ InRelease
Ign:1 file:/<<BUILDDIR>>/resolver-QxXOFB/apt_archive ./ InRelease
Get:2 file:/<<BUILDDIR>>/resolver-QxXOFB/apt_archive ./ Release [2119 B]
Get:3 file:/<<BUILDDIR>>/resolver-QxXOFB/apt_archive ./ Release.gpg [299 B]
Get:4 file:/<<BUILDDIR>>/resolver-QxXOFB/apt_archive ./ Sources [194 B]
Get:5 file:/<<BUILDDIR>>/resolver-QxXOFB/apt_archive ./ Packages [508 B]
Reading package lists...
W: No sandbox user '_apt' on the system, can not drop privileges
W: file:///<<BUILDDIR>>/resolver-QxXOFB/apt_archive/./Release.gpg: Signature by key 3493EC2B8E6DC280C121C60435506D9A48F77B2E uses weak digest algorithm (SHA1)
Reading package lists...

+------------------------------------------------------------------------------+
| Install core build dependencies (apt-based resolver)                         |
+------------------------------------------------------------------------------+

Installing build dependencies
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  sbuild-build-depends-core-dummy
0 upgraded, 1 newly installed, 0 to remove and 24 not upgraded.
Need to get 0 B/768 B of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 file:/<<BUILDDIR>>/resolver-QxXOFB/apt_archive ./ sbuild-build-depends-core-dummy 0.invalid.0 [768 B]
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package sbuild-build-depends-core-dummy.
(Reading database ... 13899 files and directories currently installed.)
Preparing to unpack .../sbuild-build-depends-core-dummy.deb ...
Unpacking sbuild-build-depends-core-dummy (0.invalid.0) ...
Setting up sbuild-build-depends-core-dummy (0.invalid.0) ...
W: No sandbox user '_apt' on the system, can not drop privileges
Merged Build-Depends: debhelper (>= 9), dh-golang, golang-go, golang-dns-dev | golang-github-miekg-dns-dev, golang-github-armon-circbuf-dev, golang-github-armon-go-metrics-dev, golang-github-armon-go-radix-dev, golang-github-elazarl-go-bindata-assetfs-dev (>= 0.0~git20151224~), golang-github-fsouza-go-dockerclient-dev, golang-github-hashicorp-go-checkpoint-dev, golang-github-hashicorp-go-memdb-dev, golang-github-hashicorp-go-msgpack-dev, golang-github-hashicorp-go-reap-dev, golang-github-hashicorp-go-syslog-dev, golang-github-hashicorp-go-uuid-dev, golang-github-hashicorp-golang-lru-dev (>= 0.0~git20160207~), golang-github-hashicorp-hcl-dev, golang-github-hashicorp-hil-dev, golang-github-hashicorp-logutils-dev, golang-github-hashicorp-memberlist-dev (>= 0.0~git20160225~), golang-github-hashicorp-raft-boltdb-dev, golang-github-hashicorp-raft-dev, golang-github-hashicorp-scada-client-dev, golang-github-hashicorp-serf-dev (>= 0.7.0~), golang-github-hashicorp-yamux-dev (>= 0.0~git20151129~), golang-github-inconshreveable-muxado-dev, golang-github-mitchellh-cli-dev, golang-github-mitchellh-copystructure-dev, golang-github-mitchellh-mapstructure-dev, golang-github-ryanuber-columnize-dev
Filtered Build-Depends: debhelper (>= 9), dh-golang, golang-go, golang-dns-dev, golang-github-armon-circbuf-dev, golang-github-armon-go-metrics-dev, golang-github-armon-go-radix-dev, golang-github-elazarl-go-bindata-assetfs-dev (>= 0.0~git20151224~), golang-github-fsouza-go-dockerclient-dev, golang-github-hashicorp-go-checkpoint-dev, golang-github-hashicorp-go-memdb-dev, golang-github-hashicorp-go-msgpack-dev, golang-github-hashicorp-go-reap-dev, golang-github-hashicorp-go-syslog-dev, golang-github-hashicorp-go-uuid-dev, golang-github-hashicorp-golang-lru-dev (>= 0.0~git20160207~), golang-github-hashicorp-hcl-dev, golang-github-hashicorp-hil-dev, golang-github-hashicorp-logutils-dev, golang-github-hashicorp-memberlist-dev (>= 0.0~git20160225~), golang-github-hashicorp-raft-boltdb-dev, golang-github-hashicorp-raft-dev, golang-github-hashicorp-scada-client-dev, golang-github-hashicorp-serf-dev (>= 0.7.0~), golang-github-hashicorp-yamux-dev (>= 0.0~git20151129~), golang-github-inconshreveable-muxado-dev, golang-github-mitchellh-cli-dev, golang-github-mitchellh-copystructure-dev, golang-github-mitchellh-mapstructure-dev, golang-github-ryanuber-columnize-dev
dpkg-deb: building package 'sbuild-build-depends-consul-dummy' in '/<<BUILDDIR>>/resolver-1KzCP1/apt_archive/sbuild-build-depends-consul-dummy.deb'.
OK
Get:1 file:/<<BUILDDIR>>/resolver-1KzCP1/apt_archive ./ InRelease
Ign:1 file:/<<BUILDDIR>>/resolver-1KzCP1/apt_archive ./ InRelease
Get:2 file:/<<BUILDDIR>>/resolver-1KzCP1/apt_archive ./ Release [2119 B]
Get:3 file:/<<BUILDDIR>>/resolver-1KzCP1/apt_archive ./ Release.gpg [299 B]
Get:4 file:/<<BUILDDIR>>/resolver-1KzCP1/apt_archive ./ Sources [483 B]
Get:5 file:/<<BUILDDIR>>/resolver-1KzCP1/apt_archive ./ Packages [799 B]
Reading package lists...
W: No sandbox user '_apt' on the system, can not drop privileges
W: file:///<<BUILDDIR>>/resolver-1KzCP1/apt_archive/./Release.gpg: Signature by key 3493EC2B8E6DC280C121C60435506D9A48F77B2E uses weak digest algorithm (SHA1)
Reading package lists...

+------------------------------------------------------------------------------+
| Install consul build dependencies (apt-based resolver)                       |
+------------------------------------------------------------------------------+

Installing build dependencies
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  autoconf automake autopoint autotools-dev bsdmainutils ca-certificates
  debhelper dh-autoreconf dh-golang dh-strip-nondeterminism file gettext
  gettext-base golang-1.6-go golang-1.6-src golang-check.v1-dev
  golang-dbus-dev golang-dns-dev golang-github-armon-circbuf-dev
  golang-github-armon-go-metrics-dev golang-github-armon-go-radix-dev
  golang-github-bgentry-speakeasy-dev golang-github-boltdb-bolt-dev
  golang-github-codegangsta-cli-dev golang-github-coreos-go-systemd-dev
  golang-github-datadog-datadog-go-dev golang-github-davecgh-go-spew-dev
  golang-github-docker-go-units-dev
  golang-github-elazarl-go-bindata-assetfs-dev
  golang-github-fsouza-go-dockerclient-dev golang-github-gorilla-context-dev
  golang-github-gorilla-mux-dev golang-github-hashicorp-errwrap-dev
  golang-github-hashicorp-go-checkpoint-dev
  golang-github-hashicorp-go-cleanhttp-dev
  golang-github-hashicorp-go-immutable-radix-dev
  golang-github-hashicorp-go-memdb-dev golang-github-hashicorp-go-msgpack-dev
  golang-github-hashicorp-go-multierror-dev
  golang-github-hashicorp-go-reap-dev golang-github-hashicorp-go-syslog-dev
  golang-github-hashicorp-go-uuid-dev golang-github-hashicorp-golang-lru-dev
  golang-github-hashicorp-hcl-dev golang-github-hashicorp-hil-dev
  golang-github-hashicorp-logutils-dev golang-github-hashicorp-mdns-dev
  golang-github-hashicorp-memberlist-dev
  golang-github-hashicorp-net-rpc-msgpackrpc-dev
  golang-github-hashicorp-raft-boltdb-dev golang-github-hashicorp-raft-dev
  golang-github-hashicorp-scada-client-dev golang-github-hashicorp-serf-dev
  golang-github-hashicorp-uuid-dev golang-github-hashicorp-yamux-dev
  golang-github-inconshreveable-muxado-dev golang-github-mattn-go-isatty-dev
  golang-github-mitchellh-cli-dev golang-github-mitchellh-copystructure-dev
  golang-github-mitchellh-mapstructure-dev
  golang-github-mitchellh-reflectwalk-dev
  golang-github-opencontainers-runc-dev golang-github-opencontainers-specs-dev
  golang-github-pmezard-go-difflib-dev
  golang-github-prometheus-client-model-dev
  golang-github-ryanuber-columnize-dev
  golang-github-seccomp-libseccomp-golang-dev
  golang-github-sirupsen-logrus-dev golang-github-stretchr-objx-dev
  golang-github-stretchr-testify-dev golang-github-ugorji-go-codec-dev
  golang-github-ugorji-go-msgpack-dev golang-github-vishvananda-netlink-dev
  golang-github-vishvananda-netns-dev golang-github-xeipuuv-gojsonpointer-dev
  golang-github-xeipuuv-gojsonreference-dev
  golang-github-xeipuuv-gojsonschema-dev golang-go golang-gocapability-dev
  golang-golang-x-net-dev golang-golang-x-sys-dev golang-golang-x-text-dev
  golang-golang-x-tools golang-golang-x-tools-dev golang-gopkg-check.v1-dev
  golang-gopkg-mgo.v2-dev golang-gopkg-tomb.v2-dev
  golang-gopkg-vmihailenco-msgpack.v2-dev golang-goprotobuf-dev
  golang-logrus-dev golang-procfs-dev golang-prometheus-client-dev
  golang-protobuf-extensions-dev golang-src golang-x-text-dev groff-base
  intltool-debian libarchive-zip-perl libbsd0 libcroco3 libffi6
  libfile-stripnondeterminism-perl libglib2.0-0 libicu55 libjs-jquery
  libjs-jquery-ui libmagic1 libpipeline1 libprotobuf9v5 libprotoc9v5
  libsasl2-2 libsasl2-dev libsasl2-modules-db libseccomp-dev libsigsegv2
  libssl1.0.2 libsystemd-dev libtimedate-perl libtool libunistring0 libxml2 m4
  man-db openssl pkg-config po-debconf protobuf-compiler
Suggested packages:
  autoconf-archive gnu-standards autoconf-doc wamerican | wordlist whois
  vacation dh-make gettext-doc libasprintf-dev libgettextpo-dev bzr git
  mercurial subversion groff libjs-jquery-ui-docs seccomp libtool-doc gfortran
  | fortran95-compiler gcj-jdk less www-browser libmail-box-perl
Recommended packages:
  curl | wget | lynx-cur libglib2.0-data shared-mime-info xdg-user-dirs
  javascript-common libsasl2-modules libltdl-dev xml-core
  libmail-sendmail-perl
The following NEW packages will be installed:
  autoconf automake autopoint autotools-dev bsdmainutils ca-certificates
  debhelper dh-autoreconf dh-golang dh-strip-nondeterminism file gettext
  gettext-base golang-1.6-go golang-1.6-src golang-check.v1-dev
  golang-dbus-dev golang-dns-dev golang-github-armon-circbuf-dev
  golang-github-armon-go-metrics-dev golang-github-armon-go-radix-dev
  golang-github-bgentry-speakeasy-dev golang-github-boltdb-bolt-dev
  golang-github-codegangsta-cli-dev golang-github-coreos-go-systemd-dev
  golang-github-datadog-datadog-go-dev golang-github-davecgh-go-spew-dev
  golang-github-docker-go-units-dev
  golang-github-elazarl-go-bindata-assetfs-dev
  golang-github-fsouza-go-dockerclient-dev golang-github-gorilla-context-dev
  golang-github-gorilla-mux-dev golang-github-hashicorp-errwrap-dev
  golang-github-hashicorp-go-checkpoint-dev
  golang-github-hashicorp-go-cleanhttp-dev
  golang-github-hashicorp-go-immutable-radix-dev
  golang-github-hashicorp-go-memdb-dev golang-github-hashicorp-go-msgpack-dev
  golang-github-hashicorp-go-multierror-dev
  golang-github-hashicorp-go-reap-dev golang-github-hashicorp-go-syslog-dev
  golang-github-hashicorp-go-uuid-dev golang-github-hashicorp-golang-lru-dev
  golang-github-hashicorp-hcl-dev golang-github-hashicorp-hil-dev
  golang-github-hashicorp-logutils-dev golang-github-hashicorp-mdns-dev
  golang-github-hashicorp-memberlist-dev
  golang-github-hashicorp-net-rpc-msgpackrpc-dev
  golang-github-hashicorp-raft-boltdb-dev golang-github-hashicorp-raft-dev
  golang-github-hashicorp-scada-client-dev golang-github-hashicorp-serf-dev
  golang-github-hashicorp-uuid-dev golang-github-hashicorp-yamux-dev
  golang-github-inconshreveable-muxado-dev golang-github-mattn-go-isatty-dev
  golang-github-mitchellh-cli-dev golang-github-mitchellh-copystructure-dev
  golang-github-mitchellh-mapstructure-dev
  golang-github-mitchellh-reflectwalk-dev
  golang-github-opencontainers-runc-dev golang-github-opencontainers-specs-dev
  golang-github-pmezard-go-difflib-dev
  golang-github-prometheus-client-model-dev
  golang-github-ryanuber-columnize-dev
  golang-github-seccomp-libseccomp-golang-dev
  golang-github-sirupsen-logrus-dev golang-github-stretchr-objx-dev
  golang-github-stretchr-testify-dev golang-github-ugorji-go-codec-dev
  golang-github-ugorji-go-msgpack-dev golang-github-vishvananda-netlink-dev
  golang-github-vishvananda-netns-dev golang-github-xeipuuv-gojsonpointer-dev
  golang-github-xeipuuv-gojsonreference-dev
  golang-github-xeipuuv-gojsonschema-dev golang-go golang-gocapability-dev
  golang-golang-x-net-dev golang-golang-x-sys-dev golang-golang-x-text-dev
  golang-golang-x-tools golang-golang-x-tools-dev golang-gopkg-check.v1-dev
  golang-gopkg-mgo.v2-dev golang-gopkg-tomb.v2-dev
  golang-gopkg-vmihailenco-msgpack.v2-dev golang-goprotobuf-dev
  golang-logrus-dev golang-procfs-dev golang-prometheus-client-dev
  golang-protobuf-extensions-dev golang-src golang-x-text-dev groff-base
  intltool-debian libarchive-zip-perl libbsd0 libcroco3 libffi6
  libfile-stripnondeterminism-perl libglib2.0-0 libicu55 libjs-jquery
  libjs-jquery-ui libmagic1 libpipeline1 libprotobuf9v5 libprotoc9v5
  libsasl2-2 libsasl2-dev libsasl2-modules-db libseccomp-dev libsigsegv2
  libssl1.0.2 libsystemd-dev libtimedate-perl libtool libunistring0 libxml2 m4
  man-db openssl pkg-config po-debconf protobuf-compiler
  sbuild-build-depends-consul-dummy
0 upgraded, 128 newly installed, 0 to remove and 24 not upgraded.
Need to get 14.5 MB/64.7 MB of archives.
After this operation, 373 MB of additional disk space will be used.
Get:1 file:/<<BUILDDIR>>/resolver-1KzCP1/apt_archive ./ sbuild-build-depends-consul-dummy 0.invalid.0 [1058 B]
Get:2 http://172.17.0.1/private stretch-staging/main armhf groff-base armhf 1.22.3-7 [1083 kB]
Get:3 http://172.17.0.1/private stretch-staging/main armhf libbsd0 armhf 0.8.3-1 [89.0 kB]
Get:4 http://172.17.0.1/private stretch-staging/main armhf bsdmainutils armhf 9.0.10 [177 kB]
Get:5 http://172.17.0.1/private stretch-staging/main armhf libpipeline1 armhf 1.4.1-2 [23.7 kB]
Get:6 http://172.17.0.1/private stretch-staging/main armhf man-db armhf 2.7.5-1 [975 kB]
Get:7 http://172.17.0.1/private stretch-staging/main armhf libssl1.0.2 armhf 1.0.2h-1 [889 kB]
Get:8 http://172.17.0.1/private stretch-staging/main armhf libmagic1 armhf 1:5.25-2 [250 kB]
Get:9 http://172.17.0.1/private stretch-staging/main armhf file armhf 1:5.25-2 [61.2 kB]
Get:10 http://172.17.0.1/private stretch-staging/main armhf gettext-base armhf 0.19.8.1-1 [117 kB]
Get:11 http://172.17.0.1/private stretch-staging/main armhf libsasl2-modules-db armhf 2.1.26.dfsg1-15 [65.6 kB]
Get:12 http://172.17.0.1/private stretch-staging/main armhf libsasl2-2 armhf 2.1.26.dfsg1-15 [96.7 kB]
Get:13 http://172.17.0.1/private stretch-staging/main armhf libsigsegv2 armhf 2.10-5 [28.4 kB]
Get:14 http://172.17.0.1/private stretch-staging/main armhf m4 armhf 1.4.17-5 [239 kB]
Get:15 http://172.17.0.1/private stretch-staging/main armhf autoconf all 2.69-10 [338 kB]
Get:16 http://172.17.0.1/private stretch-staging/main armhf autotools-dev all 20160430.1 [72.6 kB]
Get:17 http://172.17.0.1/private stretch-staging/main armhf automake all 1:1.15-4 [735 kB]
Get:18 http://172.17.0.1/private stretch-staging/main armhf autopoint all 0.19.8.1-1 [433 kB]
Get:19 http://172.17.0.1/private stretch-staging/main armhf openssl armhf 1.0.2h-1 [667 kB]
Get:20 http://172.17.0.1/private stretch-staging/main armhf ca-certificates all 20160104 [200 kB]
Get:21 http://172.17.0.1/private stretch-staging/main armhf libcroco3 armhf 0.6.11-1 [131 kB]
Get:22 http://172.17.0.1/private stretch-staging/main armhf libunistring0 armhf 0.9.6+really0.9.3-0.1 [252 kB]
Get:23 http://172.17.0.1/private stretch-staging/main armhf gettext armhf 0.19.8.1-1 [1433 kB]
Get:24 http://172.17.0.1/private stretch-staging/main armhf intltool-debian all 0.35.0+20060710.4 [26.3 kB]
Get:25 http://172.17.0.1/private stretch-staging/main armhf po-debconf all 1.0.19 [249 kB]
Get:26 http://172.17.0.1/private stretch-staging/main armhf libarchive-zip-perl all 1.57-1 [95.1 kB]
Get:27 http://172.17.0.1/private stretch-staging/main armhf libfile-stripnondeterminism-perl all 0.019-1 [12.2 kB]
Get:28 http://172.17.0.1/private stretch-staging/main armhf libtimedate-perl all 2.3000-2 [42.2 kB]
Get:29 http://172.17.0.1/private stretch-staging/main armhf dh-strip-nondeterminism all 0.019-1 [7352 B]
Get:30 http://172.17.0.1/private stretch-staging/main armhf libtool all 2.4.6-0.1 [200 kB]
Get:31 http://172.17.0.1/private stretch-staging/main armhf dh-autoreconf all 12 [15.8 kB]
Get:32 http://172.17.0.1/private stretch-staging/main armhf debhelper all 9.20160403 [800 kB]
Get:33 http://172.17.0.1/private stretch-staging/main armhf golang-github-mattn-go-isatty-dev all 0.0.1-1 [3456 B]
Get:34 http://172.17.0.1/private stretch-staging/main armhf libsasl2-dev armhf 2.1.26.dfsg1-15 [293 kB]
Get:35 http://172.17.0.1/private stretch-staging/main armhf libseccomp-dev armhf 2.3.1-2 [57.8 kB]
Get:36 http://172.17.0.1/private stretch-staging/main armhf dh-golang all 1.18 [9278 B]
Get:37 http://172.17.0.1/private stretch-staging/main armhf golang-dns-dev all 0.0~git20160414.0.89d9c5e-1 [135 kB]
Get:38 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-circbuf-dev all 0.0~git20150827.0.bbbad09-1 [3650 B]
Get:39 http://172.17.0.1/private stretch-staging/main armhf golang-github-datadog-datadog-go-dev all 0.0~git20150930.0.b050cd8-1 [7034 B]
Get:40 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-go-metrics-dev all 0.0~git20160307.0.f303b03-1 [13.7 kB]
Get:41 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-go-radix-dev all 0.0~git20150602.0.fbd82e8-1 [6472 B]
Get:42 http://172.17.0.1/private stretch-staging/main armhf golang-github-bgentry-speakeasy-dev all 0.0~git20150902.0.36e9cfd-1 [4632 B]
Get:43 http://172.17.0.1/private stretch-staging/main armhf golang-github-docker-go-units-dev all 0.3.0-1 [11.8 kB]
Get:44 http://172.17.0.1/private stretch-staging/main armhf golang-github-elazarl-go-bindata-assetfs-dev all 0.0~git20151224.0.57eb5e1-1 [5088 B]
Get:45 http://172.17.0.1/private stretch-staging/main armhf golang-github-pmezard-go-difflib-dev all 0.0~git20160110.0.792786c-2 [11.6 kB]
Get:46 http://172.17.0.1/private stretch-staging/main armhf golang-github-sirupsen-logrus-dev all 0.10.0-2 [24.6 kB]
Get:47 http://172.17.0.1/private stretch-staging/main armhf golang-logrus-dev all 0.10.0-2 [3522 B]
Get:48 http://172.17.0.1/private stretch-staging/main armhf golang-github-xeipuuv-gojsonpointer-dev all 0.0~git20151027.0.e0fe6f6-1 [4418 B]
Get:49 http://172.17.0.1/private stretch-staging/main armhf golang-github-xeipuuv-gojsonreference-dev all 0.0~git20150808.0.e02fc20-1 [4424 B]
Get:50 http://172.17.0.1/private stretch-staging/main armhf golang-github-xeipuuv-gojsonschema-dev all 0.0~git20160323.0.93e72a7-1 [23.5 kB]
Get:51 http://172.17.0.1/private stretch-staging/main armhf golang-github-opencontainers-specs-dev all 0.5.0-1 [12.1 kB]
Get:52 http://172.17.0.1/private stretch-staging/main armhf golang-github-seccomp-libseccomp-golang-dev all 0.0~git20150813.0.1b506fc-1 [12.9 kB]
Get:53 http://172.17.0.1/private stretch-staging/main armhf golang-github-vishvananda-netns-dev all 0.0~git20150710.0.604eaf1-1 [5448 B]
Get:54 http://172.17.0.1/private stretch-staging/main armhf golang-github-vishvananda-netlink-dev all 0.0~git20160306.0.4fdf23c-1 [50.5 kB]
Get:55 http://172.17.0.1/private stretch-staging/main armhf golang-gocapability-dev all 0.0~git20150716.0.2c00dae-1 [10.8 kB]
Get:56 http://172.17.0.1/private stretch-staging/main armhf golang-github-opencontainers-runc-dev all 0.1.0+dfsg1-1 [144 kB]
Get:57 http://172.17.0.1/private stretch-staging/main armhf golang-github-gorilla-mux-dev all 1.1-2 [26.0 kB]
Get:58 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-text-dev all 0.0~git20160606.0.a4d77b4-1 [2158 kB]
Get:59 http://172.17.0.1/private stretch-staging/main armhf golang-x-text-dev all 0.0~git20160606.0.a4d77b4-1 [2726 B]
Get:60 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-net-dev all 1:0.0+git20160518.b3e9c8f+dfsg-2 [520 kB]
Get:61 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-sys-dev all 0.0~git20160611.0.7f918dd-1 [183 kB]
Get:62 http://172.17.0.1/private stretch-staging/main armhf golang-github-fsouza-go-dockerclient-dev all 0.0+git20160316-2 [169 kB]
Get:63 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-errwrap-dev all 0.0~git20141028.0.7554cd9-1 [9692 B]
Get:64 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-cleanhttp-dev all 0.0~git20160217.0.875fb67-1 [8256 B]
Get:65 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-checkpoint-dev all 0.0~git20151022.0.e4b2dc3-1 [11.2 kB]
Get:66 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-golang-lru-dev all 0.0~git20160207.0.a0d98a5-1 [12.9 kB]
Get:67 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-uuid-dev all 0.0~git20160218.0.6994546-1 [7306 B]
Get:68 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-immutable-radix-dev all 0.0~git20160222.0.8e8ed81-1 [13.4 kB]
Get:69 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-memdb-dev all 0.0~git20160301.0.98f52f5-1 [19.1 kB]
Get:70 http://172.17.0.1/private stretch-staging/main armhf golang-github-ugorji-go-msgpack-dev all 0.0~git20130605.792643-1 [20.3 kB]
Get:71 http://172.17.0.1/private stretch-staging/main armhf golang-gopkg-vmihailenco-msgpack.v2-dev all 2.5.1-1 [18.3 kB]
Get:72 http://172.17.0.1/private stretch-staging/main armhf golang-gopkg-tomb.v2-dev all 0.0~git20140626.14b3d72-1 [5140 B]
Get:73 http://172.17.0.1/private stretch-staging/main armhf golang-gopkg-mgo.v2-dev all 2015.12.06-1 [138 kB]
Get:74 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-msgpack-dev all 0.0~git20150518.0.fa3f638-1 [42.3 kB]
Get:75 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-multierror-dev all 0.0~git20150916.0.d30f099-1 [9274 B]
Get:76 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-reap-dev all 0.0~git20160113.0.2d85522-1 [9084 B]
Get:77 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-syslog-dev all 0.0~git20150218.0.42a2b57-1 [5336 B]
Get:78 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-uuid-dev all 0.0~git20160311.0.d610f28-1 [7646 B]
Get:79 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-hcl-dev all 0.0~git20160607.0.d7400db-1 [50.7 kB]
Get:80 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-reflectwalk-dev all 0.0~git20150527.0.eecf4c7-1 [5374 B]
Get:81 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-hil-dev all 0.0~git20160326.0.40da60f-1 [29.2 kB]
Get:82 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-logutils-dev all 0.0~git20150609.0.0dc08b1-1 [8150 B]
Get:83 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-mdns-dev all 0.0~git20150317.0.2b439d3-1 [10.9 kB]
Get:84 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-memberlist-dev all 0.0~git20160329.0.88ac4de-1 [50.2 kB]
Get:85 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-net-rpc-msgpackrpc-dev all 0.0~git20151116.0.a14192a-1 [4168 B]
Get:86 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-raft-dev all 0.0~git20160317.0.3359516-1 [52.2 kB]
Get:87 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-raft-boltdb-dev all 0.0~git20150201.d1e82c1-1 [9744 B]
Get:88 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-yamux-dev all 0.0~git20151129.0.df94978-1 [20.0 kB]
Get:89 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-scada-client-dev all 0.0~git20150828.0.84989fd-1 [17.4 kB]
Get:90 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-cli-dev all 0.0~git20160203.0.5c87c51-1 [16.9 kB]
Get:91 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-mapstructure-dev all 0.0~git20160212.0.d2dd026-1 [14.8 kB]
Get:92 http://172.17.0.1/private stretch-staging/main armhf golang-github-ryanuber-columnize-dev all 2.1.0-1 [5140 B]
Get:93 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-serf-dev all 0.7.0~ds1-1 [110 kB]
Get:94 http://172.17.0.1/private stretch-staging/main armhf golang-github-inconshreveable-muxado-dev all 0.0~git20140312.0.f693c7e-1 [26.4 kB]
Get:95 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-copystructure-dev all 0.0~git20160128.0.80adcec-1 [5098 B]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 14.5 MB in 46s (310 kB/s)
Selecting previously unselected package groff-base.
(Reading database ... 13899 files and directories currently installed.)
Preparing to unpack .../groff-base_1.22.3-7_armhf.deb ...
Unpacking groff-base (1.22.3-7) ...
Selecting previously unselected package libbsd0:armhf.
Preparing to unpack .../libbsd0_0.8.3-1_armhf.deb ...
Unpacking libbsd0:armhf (0.8.3-1) ...
Selecting previously unselected package bsdmainutils.
Preparing to unpack .../bsdmainutils_9.0.10_armhf.deb ...
Unpacking bsdmainutils (9.0.10) ...
Selecting previously unselected package libpipeline1:armhf.
Preparing to unpack .../libpipeline1_1.4.1-2_armhf.deb ...
Unpacking libpipeline1:armhf (1.4.1-2) ...
Selecting previously unselected package man-db.
Preparing to unpack .../man-db_2.7.5-1_armhf.deb ...
Unpacking man-db (2.7.5-1) ...
Selecting previously unselected package libssl1.0.2:armhf.
Preparing to unpack .../libssl1.0.2_1.0.2h-1_armhf.deb ...
Unpacking libssl1.0.2:armhf (1.0.2h-1) ...
Selecting previously unselected package libmagic1:armhf.
Preparing to unpack .../libmagic1_1%3a5.25-2_armhf.deb ...
Unpacking libmagic1:armhf (1:5.25-2) ...
Selecting previously unselected package file.
Preparing to unpack .../file_1%3a5.25-2_armhf.deb ...
Unpacking file (1:5.25-2) ...
Selecting previously unselected package gettext-base.
Preparing to unpack .../gettext-base_0.19.8.1-1_armhf.deb ...
Unpacking gettext-base (0.19.8.1-1) ...
Selecting previously unselected package libsasl2-modules-db:armhf.
Preparing to unpack .../libsasl2-modules-db_2.1.26.dfsg1-15_armhf.deb ...
Unpacking libsasl2-modules-db:armhf (2.1.26.dfsg1-15) ...
Selecting previously unselected package libsasl2-2:armhf.
Preparing to unpack .../libsasl2-2_2.1.26.dfsg1-15_armhf.deb ...
Unpacking libsasl2-2:armhf (2.1.26.dfsg1-15) ...
Selecting previously unselected package libicu55:armhf.
Preparing to unpack .../libicu55_55.1-7_armhf.deb ...
Unpacking libicu55:armhf (55.1-7) ...
Selecting previously unselected package libxml2:armhf.
Preparing to unpack .../libxml2_2.9.3+dfsg1-1.2_armhf.deb ...
Unpacking libxml2:armhf (2.9.3+dfsg1-1.2) ...
Selecting previously unselected package libsigsegv2:armhf.
Preparing to unpack .../libsigsegv2_2.10-5_armhf.deb ...
Unpacking libsigsegv2:armhf (2.10-5) ...
Selecting previously unselected package m4.
Preparing to unpack .../archives/m4_1.4.17-5_armhf.deb ...
Unpacking m4 (1.4.17-5) ...
Selecting previously unselected package autoconf.
Preparing to unpack .../autoconf_2.69-10_all.deb ...
Unpacking autoconf (2.69-10) ...
Selecting previously unselected package autotools-dev.
Preparing to unpack .../autotools-dev_20160430.1_all.deb ...
Unpacking autotools-dev (20160430.1) ...
Selecting previously unselected package automake.
Preparing to unpack .../automake_1%3a1.15-4_all.deb ...
Unpacking automake (1:1.15-4) ...
Selecting previously unselected package autopoint.
Preparing to unpack .../autopoint_0.19.8.1-1_all.deb ...
Unpacking autopoint (0.19.8.1-1) ...
Selecting previously unselected package openssl.
Preparing to unpack .../openssl_1.0.2h-1_armhf.deb ...
Unpacking openssl (1.0.2h-1) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../ca-certificates_20160104_all.deb ...
Unpacking ca-certificates (20160104) ...
Selecting previously unselected package libffi6:armhf.
Preparing to unpack .../libffi6_3.2.1-4_armhf.deb ...
Unpacking libffi6:armhf (3.2.1-4) ...
Selecting previously unselected package libglib2.0-0:armhf.
Preparing to unpack .../libglib2.0-0_2.48.1-1_armhf.deb ...
Unpacking libglib2.0-0:armhf (2.48.1-1) ...
Selecting previously unselected package libcroco3:armhf.
Preparing to unpack .../libcroco3_0.6.11-1_armhf.deb ...
Unpacking libcroco3:armhf (0.6.11-1) ...
Selecting previously unselected package libunistring0:armhf.
Preparing to unpack .../libunistring0_0.9.6+really0.9.3-0.1_armhf.deb ...
Unpacking libunistring0:armhf (0.9.6+really0.9.3-0.1) ...
Selecting previously unselected package gettext.
Preparing to unpack .../gettext_0.19.8.1-1_armhf.deb ...
Unpacking gettext (0.19.8.1-1) ...
Selecting previously unselected package intltool-debian.
Preparing to unpack .../intltool-debian_0.35.0+20060710.4_all.deb ...
Unpacking intltool-debian (0.35.0+20060710.4) ...
Selecting previously unselected package po-debconf.
Preparing to unpack .../po-debconf_1.0.19_all.deb ...
Unpacking po-debconf (1.0.19) ...
Selecting previously unselected package libarchive-zip-perl.
Preparing to unpack .../libarchive-zip-perl_1.57-1_all.deb ...
Unpacking libarchive-zip-perl (1.57-1) ...
Selecting previously unselected package libfile-stripnondeterminism-perl.
Preparing to unpack .../libfile-stripnondeterminism-perl_0.019-1_all.deb ...
Unpacking libfile-stripnondeterminism-perl (0.019-1) ...
Selecting previously unselected package libtimedate-perl.
Preparing to unpack .../libtimedate-perl_2.3000-2_all.deb ...
Unpacking libtimedate-perl (2.3000-2) ...
Selecting previously unselected package dh-strip-nondeterminism.
Preparing to unpack .../dh-strip-nondeterminism_0.019-1_all.deb ...
Unpacking dh-strip-nondeterminism (0.019-1) ...
Selecting previously unselected package libtool.
Preparing to unpack .../libtool_2.4.6-0.1_all.deb ...
Unpacking libtool (2.4.6-0.1) ...
Selecting previously unselected package dh-autoreconf.
Preparing to unpack .../dh-autoreconf_12_all.deb ...
Unpacking dh-autoreconf (12) ...
Selecting previously unselected package debhelper.
Preparing to unpack .../debhelper_9.20160403_all.deb ...
Unpacking debhelper (9.20160403) ...
Selecting previously unselected package golang-1.6-src.
Preparing to unpack .../golang-1.6-src_1.6.2-1+rpi1_armhf.deb ...
Unpacking golang-1.6-src (1.6.2-1+rpi1) ...
Selecting previously unselected package golang-1.6-go.
Preparing to unpack .../golang-1.6-go_1.6.2-1+rpi1_armhf.deb ...
Unpacking golang-1.6-go (1.6.2-1+rpi1) ...
Selecting previously unselected package golang-github-mattn-go-isatty-dev.
Preparing to unpack .../golang-github-mattn-go-isatty-dev_0.0.1-1_all.deb ...
Unpacking golang-github-mattn-go-isatty-dev (0.0.1-1) ...
Selecting previously unselected package golang-src.
Preparing to unpack .../golang-src_2%3a1.6.1+1+b3_armhf.deb ...
Unpacking golang-src (2:1.6.1+1+b3) ...
Selecting previously unselected package golang-go.
Preparing to unpack .../golang-go_2%3a1.6.1+1+b3_armhf.deb ...
Unpacking golang-go (2:1.6.1+1+b3) ...
Selecting previously unselected package libjs-jquery.
Preparing to unpack .../libjs-jquery_1.12.3-1_all.deb ...
Unpacking libjs-jquery (1.12.3-1) ...
Selecting previously unselected package libjs-jquery-ui.
Preparing to unpack .../libjs-jquery-ui_1.10.1+dfsg-1_all.deb ...
Unpacking libjs-jquery-ui (1.10.1+dfsg-1) ...
Selecting previously unselected package libprotobuf9v5:armhf.
Preparing to unpack .../libprotobuf9v5_2.6.1-2_armhf.deb ...
Unpacking libprotobuf9v5:armhf (2.6.1-2) ...
Selecting previously unselected package libprotoc9v5:armhf.
Preparing to unpack .../libprotoc9v5_2.6.1-2_armhf.deb ...
Unpacking libprotoc9v5:armhf (2.6.1-2) ...
Selecting previously unselected package libsasl2-dev.
Preparing to unpack .../libsasl2-dev_2.1.26.dfsg1-15_armhf.deb ...
Unpacking libsasl2-dev (2.1.26.dfsg1-15) ...
Selecting previously unselected package libseccomp-dev:armhf.
Preparing to unpack .../libseccomp-dev_2.3.1-2_armhf.deb ...
Unpacking libseccomp-dev:armhf (2.3.1-2) ...
Selecting previously unselected package libsystemd-dev:armhf.
Preparing to unpack .../libsystemd-dev_230-2_armhf.deb ...
Unpacking libsystemd-dev:armhf (230-2) ...
Selecting previously unselected package pkg-config.
Preparing to unpack .../pkg-config_0.29-4_armhf.deb ...
Unpacking pkg-config (0.29-4) ...
Selecting previously unselected package protobuf-compiler.
Preparing to unpack .../protobuf-compiler_2.6.1-2_armhf.deb ...
Unpacking protobuf-compiler (2.6.1-2) ...
Selecting previously unselected package dh-golang.
Preparing to unpack .../dh-golang_1.18_all.deb ...
Unpacking dh-golang (1.18) ...
Selecting previously unselected package golang-gopkg-check.v1-dev.
Preparing to unpack .../golang-gopkg-check.v1-dev_0.0+git20160105.0.4f90aea-2_all.deb ...
Unpacking golang-gopkg-check.v1-dev (0.0+git20160105.0.4f90aea-2) ...
Selecting previously unselected package golang-check.v1-dev.
Preparing to unpack .../golang-check.v1-dev_0.0+git20160105.0.4f90aea-2_all.deb ...
Unpacking golang-check.v1-dev (0.0+git20160105.0.4f90aea-2) ...
Selecting previously unselected package golang-dbus-dev.
Preparing to unpack .../golang-dbus-dev_3-1_all.deb ...
Unpacking golang-dbus-dev (3-1) ...
Selecting previously unselected package golang-dns-dev.
Preparing to unpack .../golang-dns-dev_0.0~git20160414.0.89d9c5e-1_all.deb ...
Unpacking golang-dns-dev (0.0~git20160414.0.89d9c5e-1) ...
Selecting previously unselected package golang-github-armon-circbuf-dev.
Preparing to unpack .../golang-github-armon-circbuf-dev_0.0~git20150827.0.bbbad09-1_all.deb ...
Unpacking golang-github-armon-circbuf-dev (0.0~git20150827.0.bbbad09-1) ...
Selecting previously unselected package golang-goprotobuf-dev.
Preparing to unpack .../golang-goprotobuf-dev_0.0~git20160425.7cc19b7-1_armhf.deb ...
Unpacking golang-goprotobuf-dev (0.0~git20160425.7cc19b7-1) ...
Selecting previously unselected package golang-github-prometheus-client-model-dev.
Preparing to unpack .../golang-github-prometheus-client-model-dev_0.0.2+git20150212.12.fa8ad6f-1_all.deb ...
Unpacking golang-github-prometheus-client-model-dev (0.0.2+git20150212.12.fa8ad6f-1) ...
Selecting previously unselected package golang-procfs-dev.
Preparing to unpack .../golang-procfs-dev_0+git20160411.abf152e-1_all.deb ...
Unpacking golang-procfs-dev (0+git20160411.abf152e-1) ...
Selecting previously unselected package golang-protobuf-extensions-dev.
Preparing to unpack .../golang-protobuf-extensions-dev_1.0.0-1_all.deb ...
Unpacking golang-protobuf-extensions-dev (1.0.0-1) ...
Selecting previously unselected package golang-prometheus-client-dev.
Preparing to unpack .../golang-prometheus-client-dev_0.7.0+ds-4_all.deb ...
Unpacking golang-prometheus-client-dev (0.7.0+ds-4) ...
Selecting previously unselected package golang-github-datadog-datadog-go-dev.
Preparing to unpack .../golang-github-datadog-datadog-go-dev_0.0~git20150930.0.b050cd8-1_all.deb ...
Unpacking golang-github-datadog-datadog-go-dev (0.0~git20150930.0.b050cd8-1) ...
Selecting previously unselected package golang-github-armon-go-metrics-dev.
Preparing to unpack .../golang-github-armon-go-metrics-dev_0.0~git20160307.0.f303b03-1_all.deb ...
Unpacking golang-github-armon-go-metrics-dev (0.0~git20160307.0.f303b03-1) ...
Selecting previously unselected package golang-github-armon-go-radix-dev.
Preparing to unpack .../golang-github-armon-go-radix-dev_0.0~git20150602.0.fbd82e8-1_all.deb ...
Unpacking golang-github-armon-go-radix-dev (0.0~git20150602.0.fbd82e8-1) ...
Selecting previously unselected package golang-github-bgentry-speakeasy-dev.
Preparing to unpack .../golang-github-bgentry-speakeasy-dev_0.0~git20150902.0.36e9cfd-1_all.deb ...
Unpacking golang-github-bgentry-speakeasy-dev (0.0~git20150902.0.36e9cfd-1) ...
Selecting previously unselected package golang-github-codegangsta-cli-dev.
Preparing to unpack .../golang-github-codegangsta-cli-dev_0.0~git20151221-1_all.deb ...
Unpacking golang-github-codegangsta-cli-dev (0.0~git20151221-1) ...
Selecting previously unselected package golang-github-boltdb-bolt-dev.
Preparing to unpack .../golang-github-boltdb-bolt-dev_1.2.1-1_all.deb ...
Unpacking golang-github-boltdb-bolt-dev (1.2.1-1) ...
Selecting previously unselected package golang-github-coreos-go-systemd-dev.
Preparing to unpack .../golang-github-coreos-go-systemd-dev_5-1_all.deb ...
Unpacking golang-github-coreos-go-systemd-dev (5-1) ...
Selecting previously unselected package golang-github-davecgh-go-spew-dev.
Preparing to unpack .../golang-github-davecgh-go-spew-dev_0.0~git20151106.5215b55-1_all.deb ...
Unpacking golang-github-davecgh-go-spew-dev (0.0~git20151106.5215b55-1) ...
Selecting previously unselected package golang-github-docker-go-units-dev.
Preparing to unpack .../golang-github-docker-go-units-dev_0.3.0-1_all.deb ...
Unpacking golang-github-docker-go-units-dev (0.3.0-1) ...
Selecting previously unselected package golang-github-elazarl-go-bindata-assetfs-dev.
Preparing to unpack .../golang-github-elazarl-go-bindata-assetfs-dev_0.0~git20151224.0.57eb5e1-1_all.deb ...
Unpacking golang-github-elazarl-go-bindata-assetfs-dev (0.0~git20151224.0.57eb5e1-1) ...
Selecting previously unselected package golang-github-pmezard-go-difflib-dev.
Preparing to unpack .../golang-github-pmezard-go-difflib-dev_0.0~git20160110.0.792786c-2_all.deb ...
Unpacking golang-github-pmezard-go-difflib-dev (0.0~git20160110.0.792786c-2) ...
Selecting previously unselected package golang-github-stretchr-objx-dev.
Preparing to unpack .../golang-github-stretchr-objx-dev_0.0~git20150928.0.1a9d0bb-1_all.deb ...
Unpacking golang-github-stretchr-objx-dev (0.0~git20150928.0.1a9d0bb-1) ...
Selecting previously unselected package golang-github-stretchr-testify-dev.
Preparing to unpack .../golang-github-stretchr-testify-dev_1.1.3+git20160418.12.c5d7a69+ds-1_all.deb ...
Unpacking golang-github-stretchr-testify-dev (1.1.3+git20160418.12.c5d7a69+ds-1) ...
Selecting previously unselected package golang-github-sirupsen-logrus-dev.
Preparing to unpack .../golang-github-sirupsen-logrus-dev_0.10.0-2_all.deb ...
Unpacking golang-github-sirupsen-logrus-dev (0.10.0-2) ...
Selecting previously unselected package golang-logrus-dev.
Preparing to unpack .../golang-logrus-dev_0.10.0-2_all.deb ...
Unpacking golang-logrus-dev (0.10.0-2) ...
Selecting previously unselected package golang-github-xeipuuv-gojsonpointer-dev.
Preparing to unpack .../golang-github-xeipuuv-gojsonpointer-dev_0.0~git20151027.0.e0fe6f6-1_all.deb ...
Unpacking golang-github-xeipuuv-gojsonpointer-dev (0.0~git20151027.0.e0fe6f6-1) ...
Selecting previously unselected package golang-github-xeipuuv-gojsonreference-dev.
Preparing to unpack .../golang-github-xeipuuv-gojsonreference-dev_0.0~git20150808.0.e02fc20-1_all.deb ...
Unpacking golang-github-xeipuuv-gojsonreference-dev (0.0~git20150808.0.e02fc20-1) ...
Selecting previously unselected package golang-github-xeipuuv-gojsonschema-dev.
Preparing to unpack .../golang-github-xeipuuv-gojsonschema-dev_0.0~git20160323.0.93e72a7-1_all.deb ...
Unpacking golang-github-xeipuuv-gojsonschema-dev (0.0~git20160323.0.93e72a7-1) ...
Selecting previously unselected package golang-github-opencontainers-specs-dev.
Preparing to unpack .../golang-github-opencontainers-specs-dev_0.5.0-1_all.deb ...
Unpacking golang-github-opencontainers-specs-dev (0.5.0-1) ...
Selecting previously unselected package golang-github-seccomp-libseccomp-golang-dev.
Preparing to unpack .../golang-github-seccomp-libseccomp-golang-dev_0.0~git20150813.0.1b506fc-1_all.deb ...
Unpacking golang-github-seccomp-libseccomp-golang-dev (0.0~git20150813.0.1b506fc-1) ...
Selecting previously unselected package golang-github-vishvananda-netns-dev.
Preparing to unpack .../golang-github-vishvananda-netns-dev_0.0~git20150710.0.604eaf1-1_all.deb ...
Unpacking golang-github-vishvananda-netns-dev (0.0~git20150710.0.604eaf1-1) ...
Selecting previously unselected package golang-github-vishvananda-netlink-dev.
Preparing to unpack .../golang-github-vishvananda-netlink-dev_0.0~git20160306.0.4fdf23c-1_all.deb ...
Unpacking golang-github-vishvananda-netlink-dev (0.0~git20160306.0.4fdf23c-1) ...
Selecting previously unselected package golang-gocapability-dev.
Preparing to unpack .../golang-gocapability-dev_0.0~git20150716.0.2c00dae-1_all.deb ...
Unpacking golang-gocapability-dev (0.0~git20150716.0.2c00dae-1) ...
Selecting previously unselected package golang-github-opencontainers-runc-dev.
Preparing to unpack .../golang-github-opencontainers-runc-dev_0.1.0+dfsg1-1_all.deb ...
Unpacking golang-github-opencontainers-runc-dev (0.1.0+dfsg1-1) ...
Selecting previously unselected package golang-github-gorilla-context-dev.
Preparing to unpack .../golang-github-gorilla-context-dev_1.1-1_all.deb ...
Unpacking golang-github-gorilla-context-dev (1.1-1) ...
Selecting previously unselected package golang-github-gorilla-mux-dev.
Preparing to unpack .../golang-github-gorilla-mux-dev_1.1-2_all.deb ...
Unpacking golang-github-gorilla-mux-dev (1.1-2) ...
Selecting previously unselected package golang-golang-x-text-dev.
Preparing to unpack .../golang-golang-x-text-dev_0.0~git20160606.0.a4d77b4-1_all.deb ...
Unpacking golang-golang-x-text-dev (0.0~git20160606.0.a4d77b4-1) ...
Selecting previously unselected package golang-x-text-dev.
Preparing to unpack .../golang-x-text-dev_0.0~git20160606.0.a4d77b4-1_all.deb ...
Unpacking golang-x-text-dev (0.0~git20160606.0.a4d77b4-1) ...
Selecting previously unselected package golang-golang-x-net-dev.
Preparing to unpack .../golang-golang-x-net-dev_1%3a0.0+git20160518.b3e9c8f+dfsg-2_all.deb ...
Unpacking golang-golang-x-net-dev (1:0.0+git20160518.b3e9c8f+dfsg-2) ...
Selecting previously unselected package golang-golang-x-sys-dev.
Preparing to unpack .../golang-golang-x-sys-dev_0.0~git20160611.0.7f918dd-1_all.deb ...
Unpacking golang-golang-x-sys-dev (0.0~git20160611.0.7f918dd-1) ...
Selecting previously unselected package golang-github-fsouza-go-dockerclient-dev.
Preparing to unpack .../golang-github-fsouza-go-dockerclient-dev_0.0+git20160316-2_all.deb ...
Unpacking golang-github-fsouza-go-dockerclient-dev (0.0+git20160316-2) ...
Selecting previously unselected package golang-github-hashicorp-errwrap-dev.
Preparing to unpack .../golang-github-hashicorp-errwrap-dev_0.0~git20141028.0.7554cd9-1_all.deb ...
Unpacking golang-github-hashicorp-errwrap-dev (0.0~git20141028.0.7554cd9-1) ...
Selecting previously unselected package golang-github-hashicorp-go-cleanhttp-dev.
Preparing to unpack .../golang-github-hashicorp-go-cleanhttp-dev_0.0~git20160217.0.875fb67-1_all.deb ...
Unpacking golang-github-hashicorp-go-cleanhttp-dev (0.0~git20160217.0.875fb67-1) ...
Selecting previously unselected package golang-github-hashicorp-go-checkpoint-dev.
Preparing to unpack .../golang-github-hashicorp-go-checkpoint-dev_0.0~git20151022.0.e4b2dc3-1_all.deb ...
Unpacking golang-github-hashicorp-go-checkpoint-dev (0.0~git20151022.0.e4b2dc3-1) ...
Selecting previously unselected package golang-github-hashicorp-golang-lru-dev.
Preparing to unpack .../golang-github-hashicorp-golang-lru-dev_0.0~git20160207.0.a0d98a5-1_all.deb ...
Unpacking golang-github-hashicorp-golang-lru-dev (0.0~git20160207.0.a0d98a5-1) ...
Selecting previously unselected package golang-github-hashicorp-uuid-dev.
Preparing to unpack .../golang-github-hashicorp-uuid-dev_0.0~git20160218.0.6994546-1_all.deb ...
Unpacking golang-github-hashicorp-uuid-dev (0.0~git20160218.0.6994546-1) ...
Selecting previously unselected package golang-github-hashicorp-go-immutable-radix-dev.
Preparing to unpack .../golang-github-hashicorp-go-immutable-radix-dev_0.0~git20160222.0.8e8ed81-1_all.deb ...
Unpacking golang-github-hashicorp-go-immutable-radix-dev (0.0~git20160222.0.8e8ed81-1) ...
Selecting previously unselected package golang-github-hashicorp-go-memdb-dev.
Preparing to unpack .../golang-github-hashicorp-go-memdb-dev_0.0~git20160301.0.98f52f5-1_all.deb ...
Unpacking golang-github-hashicorp-go-memdb-dev (0.0~git20160301.0.98f52f5-1) ...
Selecting previously unselected package golang-github-ugorji-go-msgpack-dev.
Preparing to unpack .../golang-github-ugorji-go-msgpack-dev_0.0~git20130605.792643-1_all.deb ...
Unpacking golang-github-ugorji-go-msgpack-dev (0.0~git20130605.792643-1) ...
Selecting previously unselected package golang-github-ugorji-go-codec-dev.
Preparing to unpack .../golang-github-ugorji-go-codec-dev_0.0~git20151130.0.357a44b-1_all.deb ...
Unpacking golang-github-ugorji-go-codec-dev (0.0~git20151130.0.357a44b-1) ...
Selecting previously unselected package golang-gopkg-vmihailenco-msgpack.v2-dev.
Preparing to unpack .../golang-gopkg-vmihailenco-msgpack.v2-dev_2.5.1-1_all.deb ...
Unpacking golang-gopkg-vmihailenco-msgpack.v2-dev (2.5.1-1) ...
Selecting previously unselected package golang-gopkg-tomb.v2-dev.
Preparing to unpack .../golang-gopkg-tomb.v2-dev_0.0~git20140626.14b3d72-1_all.deb ...
Unpacking golang-gopkg-tomb.v2-dev (0.0~git20140626.14b3d72-1) ...
Selecting previously unselected package golang-gopkg-mgo.v2-dev.
Preparing to unpack .../golang-gopkg-mgo.v2-dev_2015.12.06-1_all.deb ...
Unpacking golang-gopkg-mgo.v2-dev (2015.12.06-1) ...
Selecting previously unselected package golang-github-hashicorp-go-msgpack-dev.
Preparing to unpack .../golang-github-hashicorp-go-msgpack-dev_0.0~git20150518.0.fa3f638-1_all.deb ...
Unpacking golang-github-hashicorp-go-msgpack-dev (0.0~git20150518.0.fa3f638-1) ...
Selecting previously unselected package golang-github-hashicorp-go-multierror-dev.
Preparing to unpack .../golang-github-hashicorp-go-multierror-dev_0.0~git20150916.0.d30f099-1_all.deb ...
Unpacking golang-github-hashicorp-go-multierror-dev (0.0~git20150916.0.d30f099-1) ...
Selecting previously unselected package golang-github-hashicorp-go-reap-dev.
Preparing to unpack .../golang-github-hashicorp-go-reap-dev_0.0~git20160113.0.2d85522-1_all.deb ...
Unpacking golang-github-hashicorp-go-reap-dev (0.0~git20160113.0.2d85522-1) ...
Selecting previously unselected package golang-github-hashicorp-go-syslog-dev.
Preparing to unpack .../golang-github-hashicorp-go-syslog-dev_0.0~git20150218.0.42a2b57-1_all.deb ...
Unpacking golang-github-hashicorp-go-syslog-dev (0.0~git20150218.0.42a2b57-1) ...
Selecting previously unselected package golang-github-hashicorp-go-uuid-dev.
Preparing to unpack .../golang-github-hashicorp-go-uuid-dev_0.0~git20160311.0.d610f28-1_all.deb ...
Unpacking golang-github-hashicorp-go-uuid-dev (0.0~git20160311.0.d610f28-1) ...
Selecting previously unselected package golang-github-hashicorp-hcl-dev.
Preparing to unpack .../golang-github-hashicorp-hcl-dev_0.0~git20160607.0.d7400db-1_all.deb ...
Unpacking golang-github-hashicorp-hcl-dev (0.0~git20160607.0.d7400db-1) ...
Selecting previously unselected package golang-golang-x-tools-dev.
Preparing to unpack .../golang-golang-x-tools-dev_1%3a0.0~git20160315.0.f42ec61-2_all.deb ...
Unpacking golang-golang-x-tools-dev (1:0.0~git20160315.0.f42ec61-2) ...
Selecting previously unselected package golang-golang-x-tools.
Preparing to unpack .../golang-golang-x-tools_1%3a0.0~git20160315.0.f42ec61-2_armhf.deb ...
Unpacking golang-golang-x-tools (1:0.0~git20160315.0.f42ec61-2) ...
Selecting previously unselected package golang-github-mitchellh-reflectwalk-dev.
Preparing to unpack .../golang-github-mitchellh-reflectwalk-dev_0.0~git20150527.0.eecf4c7-1_all.deb ...
Unpacking golang-github-mitchellh-reflectwalk-dev (0.0~git20150527.0.eecf4c7-1) ...
Selecting previously unselected package golang-github-hashicorp-hil-dev.
Preparing to unpack .../golang-github-hashicorp-hil-dev_0.0~git20160326.0.40da60f-1_all.deb ...
Unpacking golang-github-hashicorp-hil-dev (0.0~git20160326.0.40da60f-1) ...
Selecting previously unselected package golang-github-hashicorp-logutils-dev.
Preparing to unpack .../golang-github-hashicorp-logutils-dev_0.0~git20150609.0.0dc08b1-1_all.deb ...
Unpacking golang-github-hashicorp-logutils-dev (0.0~git20150609.0.0dc08b1-1) ...
Selecting previously unselected package golang-github-hashicorp-mdns-dev.
Preparing to unpack .../golang-github-hashicorp-mdns-dev_0.0~git20150317.0.2b439d3-1_all.deb ...
Unpacking golang-github-hashicorp-mdns-dev (0.0~git20150317.0.2b439d3-1) ...
Selecting previously unselected package golang-github-hashicorp-memberlist-dev.
Preparing to unpack .../golang-github-hashicorp-memberlist-dev_0.0~git20160329.0.88ac4de-1_all.deb ...
Unpacking golang-github-hashicorp-memberlist-dev (0.0~git20160329.0.88ac4de-1) ...
Selecting previously unselected package golang-github-hashicorp-net-rpc-msgpackrpc-dev.
Preparing to unpack .../golang-github-hashicorp-net-rpc-msgpackrpc-dev_0.0~git20151116.0.a14192a-1_all.deb ...
Unpacking golang-github-hashicorp-net-rpc-msgpackrpc-dev (0.0~git20151116.0.a14192a-1) ...
Selecting previously unselected package golang-github-hashicorp-raft-dev.
Preparing to unpack .../golang-github-hashicorp-raft-dev_0.0~git20160317.0.3359516-1_all.deb ...
Unpacking golang-github-hashicorp-raft-dev (0.0~git20160317.0.3359516-1) ...
Selecting previously unselected package golang-github-hashicorp-raft-boltdb-dev.
Preparing to unpack .../golang-github-hashicorp-raft-boltdb-dev_0.0~git20150201.d1e82c1-1_all.deb ...
Unpacking golang-github-hashicorp-raft-boltdb-dev (0.0~git20150201.d1e82c1-1) ...
Selecting previously unselected package golang-github-hashicorp-yamux-dev.
Preparing to unpack .../golang-github-hashicorp-yamux-dev_0.0~git20151129.0.df94978-1_all.deb ...
Unpacking golang-github-hashicorp-yamux-dev (0.0~git20151129.0.df94978-1) ...
Selecting previously unselected package golang-github-hashicorp-scada-client-dev.
Preparing to unpack .../golang-github-hashicorp-scada-client-dev_0.0~git20150828.0.84989fd-1_all.deb ...
Unpacking golang-github-hashicorp-scada-client-dev (0.0~git20150828.0.84989fd-1) ...
Selecting previously unselected package golang-github-mitchellh-cli-dev.
Preparing to unpack .../golang-github-mitchellh-cli-dev_0.0~git20160203.0.5c87c51-1_all.deb ...
Unpacking golang-github-mitchellh-cli-dev (0.0~git20160203.0.5c87c51-1) ...
Selecting previously unselected package golang-github-mitchellh-mapstructure-dev.
Preparing to unpack .../golang-github-mitchellh-mapstructure-dev_0.0~git20160212.0.d2dd026-1_all.deb ...
Unpacking golang-github-mitchellh-mapstructure-dev (0.0~git20160212.0.d2dd026-1) ...
Selecting previously unselected package golang-github-ryanuber-columnize-dev.
Preparing to unpack .../golang-github-ryanuber-columnize-dev_2.1.0-1_all.deb ...
Unpacking golang-github-ryanuber-columnize-dev (2.1.0-1) ...
Selecting previously unselected package golang-github-hashicorp-serf-dev.
Preparing to unpack .../golang-github-hashicorp-serf-dev_0.7.0~ds1-1_all.deb ...
Unpacking golang-github-hashicorp-serf-dev (0.7.0~ds1-1) ...
Selecting previously unselected package golang-github-inconshreveable-muxado-dev.
Preparing to unpack .../golang-github-inconshreveable-muxado-dev_0.0~git20140312.0.f693c7e-1_all.deb ...
Unpacking golang-github-inconshreveable-muxado-dev (0.0~git20140312.0.f693c7e-1) ...
Selecting previously unselected package golang-github-mitchellh-copystructure-dev.
Preparing to unpack .../golang-github-mitchellh-copystructure-dev_0.0~git20160128.0.80adcec-1_all.deb ...
Unpacking golang-github-mitchellh-copystructure-dev (0.0~git20160128.0.80adcec-1) ...
Selecting previously unselected package sbuild-build-depends-consul-dummy.
Preparing to unpack .../sbuild-build-depends-consul-dummy.deb ...
Unpacking sbuild-build-depends-consul-dummy (0.invalid.0) ...
Processing triggers for libc-bin (2.22-9) ...
Setting up groff-base (1.22.3-7) ...
Setting up libbsd0:armhf (0.8.3-1) ...
Setting up bsdmainutils (9.0.10) ...
update-alternatives: using /usr/bin/bsd-write to provide /usr/bin/write (write) in auto mode
update-alternatives: using /usr/bin/bsd-from to provide /usr/bin/from (from) in auto mode
Setting up libpipeline1:armhf (1.4.1-2) ...
Setting up man-db (2.7.5-1) ...
Not building database; man-db/auto-update is not 'true'.
Setting up libssl1.0.2:armhf (1.0.2h-1) ...
Setting up libmagic1:armhf (1:5.25-2) ...
Setting up file (1:5.25-2) ...
Setting up gettext-base (0.19.8.1-1) ...
Setting up libsasl2-modules-db:armhf (2.1.26.dfsg1-15) ...
Setting up libsasl2-2:armhf (2.1.26.dfsg1-15) ...
Setting up libicu55:armhf (55.1-7) ...
Setting up libxml2:armhf (2.9.3+dfsg1-1.2) ...
Setting up libsigsegv2:armhf (2.10-5) ...
Setting up m4 (1.4.17-5) ...
Setting up autoconf (2.69-10) ...
Setting up autotools-dev (20160430.1) ...
Setting up automake (1:1.15-4) ...
update-alternatives: using /usr/bin/automake-1.15 to provide /usr/bin/automake (automake) in auto mode
Setting up autopoint (0.19.8.1-1) ...
Setting up openssl (1.0.2h-1) ...
Setting up ca-certificates (20160104) ...
Setting up libffi6:armhf (3.2.1-4) ...
Setting up libglib2.0-0:armhf (2.48.1-1) ...
No schema files found: doing nothing.
Setting up libcroco3:armhf (0.6.11-1) ...
Setting up libunistring0:armhf (0.9.6+really0.9.3-0.1) ...
Setting up gettext (0.19.8.1-1) ...
Setting up intltool-debian (0.35.0+20060710.4) ...
Setting up po-debconf (1.0.19) ...
Setting up libarchive-zip-perl (1.57-1) ...
Setting up libfile-stripnondeterminism-perl (0.019-1) ...
Setting up libtimedate-perl (2.3000-2) ...
Setting up libtool (2.4.6-0.1) ...
Setting up golang-1.6-src (1.6.2-1+rpi1) ...
Setting up golang-1.6-go (1.6.2-1+rpi1) ...
Setting up golang-github-mattn-go-isatty-dev (0.0.1-1) ...
Setting up golang-src (2:1.6.1+1+b3) ...
Setting up golang-go (2:1.6.1+1+b3) ...
Setting up libjs-jquery (1.12.3-1) ...
Setting up libjs-jquery-ui (1.10.1+dfsg-1) ...
Setting up libprotobuf9v5:armhf (2.6.1-2) ...
Setting up libprotoc9v5:armhf (2.6.1-2) ...
Setting up libsasl2-dev (2.1.26.dfsg1-15) ...
Setting up libseccomp-dev:armhf (2.3.1-2) ...
Setting up libsystemd-dev:armhf (230-2) ...
Setting up pkg-config (0.29-4) ...
Setting up protobuf-compiler (2.6.1-2) ...
Setting up golang-gopkg-check.v1-dev (0.0+git20160105.0.4f90aea-2) ...
Setting up golang-check.v1-dev (0.0+git20160105.0.4f90aea-2) ...
Setting up golang-dbus-dev (3-1) ...
Setting up golang-dns-dev (0.0~git20160414.0.89d9c5e-1) ...
Setting up golang-github-armon-circbuf-dev (0.0~git20150827.0.bbbad09-1) ...
Setting up golang-goprotobuf-dev (0.0~git20160425.7cc19b7-1) ...
Setting up golang-github-prometheus-client-model-dev (0.0.2+git20150212.12.fa8ad6f-1) ...
Setting up golang-procfs-dev (0+git20160411.abf152e-1) ...
Setting up golang-protobuf-extensions-dev (1.0.0-1) ...
Setting up golang-prometheus-client-dev (0.7.0+ds-4) ...
Setting up golang-github-datadog-datadog-go-dev (0.0~git20150930.0.b050cd8-1) ...
Setting up golang-github-armon-go-metrics-dev (0.0~git20160307.0.f303b03-1) ...
Setting up golang-github-armon-go-radix-dev (0.0~git20150602.0.fbd82e8-1) ...
Setting up golang-github-bgentry-speakeasy-dev (0.0~git20150902.0.36e9cfd-1) ...
Setting up golang-github-codegangsta-cli-dev (0.0~git20151221-1) ...
Setting up golang-github-boltdb-bolt-dev (1.2.1-1) ...
Setting up golang-github-coreos-go-systemd-dev (5-1) ...
Setting up golang-github-davecgh-go-spew-dev (0.0~git20151106.5215b55-1) ...
Setting up golang-github-docker-go-units-dev (0.3.0-1) ...
Setting up golang-github-elazarl-go-bindata-assetfs-dev (0.0~git20151224.0.57eb5e1-1) ...
Setting up golang-github-pmezard-go-difflib-dev (0.0~git20160110.0.792786c-2) ...
Setting up golang-github-stretchr-objx-dev (0.0~git20150928.0.1a9d0bb-1) ...
Setting up golang-github-stretchr-testify-dev (1.1.3+git20160418.12.c5d7a69+ds-1) ...
Setting up golang-github-sirupsen-logrus-dev (0.10.0-2) ...
Setting up golang-logrus-dev (0.10.0-2) ...
Setting up golang-github-xeipuuv-gojsonpointer-dev (0.0~git20151027.0.e0fe6f6-1) ...
Setting up golang-github-xeipuuv-gojsonreference-dev (0.0~git20150808.0.e02fc20-1) ...
Setting up golang-github-xeipuuv-gojsonschema-dev (0.0~git20160323.0.93e72a7-1) ...
Setting up golang-github-opencontainers-specs-dev (0.5.0-1) ...
Setting up golang-github-seccomp-libseccomp-golang-dev (0.0~git20150813.0.1b506fc-1) ...
Setting up golang-github-vishvananda-netns-dev (0.0~git20150710.0.604eaf1-1) ...
Setting up golang-github-vishvananda-netlink-dev (0.0~git20160306.0.4fdf23c-1) ...
Setting up golang-gocapability-dev (0.0~git20150716.0.2c00dae-1) ...
Setting up golang-github-opencontainers-runc-dev (0.1.0+dfsg1-1) ...
Setting up golang-github-gorilla-context-dev (1.1-1) ...
Setting up golang-github-gorilla-mux-dev (1.1-2) ...
Setting up golang-golang-x-text-dev (0.0~git20160606.0.a4d77b4-1) ...
Setting up golang-x-text-dev (0.0~git20160606.0.a4d77b4-1) ...
Setting up golang-golang-x-net-dev (1:0.0+git20160518.b3e9c8f+dfsg-2) ...
Setting up golang-golang-x-sys-dev (0.0~git20160611.0.7f918dd-1) ...
Setting up golang-github-fsouza-go-dockerclient-dev (0.0+git20160316-2) ...
Setting up golang-github-hashicorp-errwrap-dev (0.0~git20141028.0.7554cd9-1) ...
Setting up golang-github-hashicorp-go-cleanhttp-dev (0.0~git20160217.0.875fb67-1) ...
Setting up golang-github-hashicorp-go-checkpoint-dev (0.0~git20151022.0.e4b2dc3-1) ...
Setting up golang-github-hashicorp-golang-lru-dev (0.0~git20160207.0.a0d98a5-1) ...
Setting up golang-github-hashicorp-uuid-dev (0.0~git20160218.0.6994546-1) ...
Setting up golang-github-hashicorp-go-immutable-radix-dev (0.0~git20160222.0.8e8ed81-1) ...
Setting up golang-github-hashicorp-go-memdb-dev (0.0~git20160301.0.98f52f5-1) ...
Setting up golang-github-ugorji-go-msgpack-dev (0.0~git20130605.792643-1) ...
Setting up golang-github-ugorji-go-codec-dev (0.0~git20151130.0.357a44b-1) ...
Setting up golang-gopkg-vmihailenco-msgpack.v2-dev (2.5.1-1) ...
Setting up golang-gopkg-tomb.v2-dev (0.0~git20140626.14b3d72-1) ...
Setting up golang-gopkg-mgo.v2-dev (2015.12.06-1) ...
Setting up golang-github-hashicorp-go-msgpack-dev (0.0~git20150518.0.fa3f638-1) ...
Setting up golang-github-hashicorp-go-multierror-dev (0.0~git20150916.0.d30f099-1) ...
Setting up golang-github-hashicorp-go-reap-dev (0.0~git20160113.0.2d85522-1) ...
Setting up golang-github-hashicorp-go-syslog-dev (0.0~git20150218.0.42a2b57-1) ...
Setting up golang-github-hashicorp-go-uuid-dev (0.0~git20160311.0.d610f28-1) ...
Setting up golang-github-hashicorp-hcl-dev (0.0~git20160607.0.d7400db-1) ...
Setting up golang-golang-x-tools-dev (1:0.0~git20160315.0.f42ec61-2) ...
Setting up golang-golang-x-tools (1:0.0~git20160315.0.f42ec61-2) ...
Setting up golang-github-mitchellh-reflectwalk-dev (0.0~git20150527.0.eecf4c7-1) ...
Setting up golang-github-hashicorp-hil-dev (0.0~git20160326.0.40da60f-1) ...
Setting up golang-github-hashicorp-logutils-dev (0.0~git20150609.0.0dc08b1-1) ...
Setting up golang-github-hashicorp-mdns-dev (0.0~git20150317.0.2b439d3-1) ...
Setting up golang-github-hashicorp-memberlist-dev (0.0~git20160329.0.88ac4de-1) ...
Setting up golang-github-hashicorp-net-rpc-msgpackrpc-dev (0.0~git20151116.0.a14192a-1) ...
Setting up golang-github-hashicorp-raft-dev (0.0~git20160317.0.3359516-1) ...
Setting up golang-github-hashicorp-raft-boltdb-dev (0.0~git20150201.d1e82c1-1) ...
Setting up golang-github-hashicorp-yamux-dev (0.0~git20151129.0.df94978-1) ...
Setting up golang-github-hashicorp-scada-client-dev (0.0~git20150828.0.84989fd-1) ...
Setting up golang-github-mitchellh-cli-dev (0.0~git20160203.0.5c87c51-1) ...
Setting up golang-github-mitchellh-mapstructure-dev (0.0~git20160212.0.d2dd026-1) ...
Setting up golang-github-ryanuber-columnize-dev (2.1.0-1) ...
Setting up golang-github-hashicorp-serf-dev (0.7.0~ds1-1) ...
Setting up golang-github-inconshreveable-muxado-dev (0.0~git20140312.0.f693c7e-1) ...
Setting up golang-github-mitchellh-copystructure-dev (0.0~git20160128.0.80adcec-1) ...
Setting up dh-autoreconf (12) ...
Setting up debhelper (9.20160403) ...
Setting up dh-golang (1.18) ...
Setting up sbuild-build-depends-consul-dummy (0.invalid.0) ...
Setting up dh-strip-nondeterminism (0.019-1) ...
Processing triggers for libc-bin (2.22-9) ...
Processing triggers for ca-certificates (20160104) ...
Updating certificates in /etc/ssl/certs...
173 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
W: No sandbox user '_apt' on the system, can not drop privileges

+------------------------------------------------------------------------------+
| Build environment                                                            |
+------------------------------------------------------------------------------+

Kernel: Linux 4.5.0-1-armmp armhf (armv7l)
Toolchain package versions: binutils_2.26-10 dpkg-dev_1.18.7 g++-5_5.3.1-21 gcc-5_5.3.1-21 libc6-dev_2.22-9 libstdc++-5-dev_5.3.1-21 libstdc++6_6.1.1-1+rpi1 linux-libc-dev_3.18.5-1~exp1+rpi19+stretch
Package versions: adduser_3.114 apt_1.2.12 autoconf_2.69-10 automake_1:1.15-4 autopoint_0.19.8.1-1 autotools-dev_20160430.1 base-files_9.6+rpi1 base-passwd_3.5.39 bash_4.3-14 binutils_2.26-10 bsdmainutils_9.0.10 bsdutils_1:2.28-5 build-essential_11.7 bzip2_1.0.6-8 ca-certificates_20160104 console-setup_1.146 console-setup-linux_1.146 coreutils_8.25-2 cpio_2.11+dfsg-5 cpp_4:5.3.1-3 cpp-5_5.3.1-21 dash_0.5.8-2.2 debconf_1.5.59 debfoster_2.7-2 debhelper_9.20160403 debianutils_4.7 dh-autoreconf_12 dh-golang_1.18 dh-strip-nondeterminism_0.019-1 diffutils_1:3.3-3 dmsetup_2:1.02.124-1 dpkg_1.18.7 dpkg-dev_1.18.7 e2fslibs_1.43-3 e2fsprogs_1.43-3 fakeroot_1.20.2-2 file_1:5.25-2 findutils_4.6.0+git+20160126-2 fuse2fs_1.43-3 g++_4:5.3.1-3 g++-5_5.3.1-21 gcc_4:5.3.1-3 gcc-4.6-base_4.6.4-5+rpi1 gcc-4.7-base_4.7.3-11+rpi1 gcc-4.8-base_4.8.5-4 gcc-4.9-base_4.9.3-14 gcc-5_5.3.1-21 gcc-5-base_5.3.1-21 gcc-6-base_6.1.1-1+rpi1 gettext_0.19.8.1-1 gettext-base_0.19.8.1-1 gnupg_1.4.20-6 golang-1.6-go_1.6.2-1+rpi1 golang-1.6-src_1.6.2-1+rpi1 golang-check.v1-dev_0.0+git20160105.0.4f90aea-2 golang-dbus-dev_3-1 golang-dns-dev_0.0~git20160414.0.89d9c5e-1 golang-github-armon-circbuf-dev_0.0~git20150827.0.bbbad09-1 golang-github-armon-go-metrics-dev_0.0~git20160307.0.f303b03-1 golang-github-armon-go-radix-dev_0.0~git20150602.0.fbd82e8-1 golang-github-bgentry-speakeasy-dev_0.0~git20150902.0.36e9cfd-1 golang-github-boltdb-bolt-dev_1.2.1-1 golang-github-codegangsta-cli-dev_0.0~git20151221-1 golang-github-coreos-go-systemd-dev_5-1 golang-github-datadog-datadog-go-dev_0.0~git20150930.0.b050cd8-1 golang-github-davecgh-go-spew-dev_0.0~git20151106.5215b55-1 golang-github-docker-go-units-dev_0.3.0-1 golang-github-elazarl-go-bindata-assetfs-dev_0.0~git20151224.0.57eb5e1-1 golang-github-fsouza-go-dockerclient-dev_0.0+git20160316-2 golang-github-gorilla-context-dev_1.1-1 golang-github-gorilla-mux-dev_1.1-2 golang-github-hashicorp-errwrap-dev_0.0~git20141028.0.7554cd9-1 
golang-github-hashicorp-go-checkpoint-dev_0.0~git20151022.0.e4b2dc3-1 golang-github-hashicorp-go-cleanhttp-dev_0.0~git20160217.0.875fb67-1 golang-github-hashicorp-go-immutable-radix-dev_0.0~git20160222.0.8e8ed81-1 golang-github-hashicorp-go-memdb-dev_0.0~git20160301.0.98f52f5-1 golang-github-hashicorp-go-msgpack-dev_0.0~git20150518.0.fa3f638-1 golang-github-hashicorp-go-multierror-dev_0.0~git20150916.0.d30f099-1 golang-github-hashicorp-go-reap-dev_0.0~git20160113.0.2d85522-1 golang-github-hashicorp-go-syslog-dev_0.0~git20150218.0.42a2b57-1 golang-github-hashicorp-go-uuid-dev_0.0~git20160311.0.d610f28-1 golang-github-hashicorp-golang-lru-dev_0.0~git20160207.0.a0d98a5-1 golang-github-hashicorp-hcl-dev_0.0~git20160607.0.d7400db-1 golang-github-hashicorp-hil-dev_0.0~git20160326.0.40da60f-1 golang-github-hashicorp-logutils-dev_0.0~git20150609.0.0dc08b1-1 golang-github-hashicorp-mdns-dev_0.0~git20150317.0.2b439d3-1 golang-github-hashicorp-memberlist-dev_0.0~git20160329.0.88ac4de-1 golang-github-hashicorp-net-rpc-msgpackrpc-dev_0.0~git20151116.0.a14192a-1 golang-github-hashicorp-raft-boltdb-dev_0.0~git20150201.d1e82c1-1 golang-github-hashicorp-raft-dev_0.0~git20160317.0.3359516-1 golang-github-hashicorp-scada-client-dev_0.0~git20150828.0.84989fd-1 golang-github-hashicorp-serf-dev_0.7.0~ds1-1 golang-github-hashicorp-uuid-dev_0.0~git20160218.0.6994546-1 golang-github-hashicorp-yamux-dev_0.0~git20151129.0.df94978-1 golang-github-inconshreveable-muxado-dev_0.0~git20140312.0.f693c7e-1 golang-github-mattn-go-isatty-dev_0.0.1-1 golang-github-mitchellh-cli-dev_0.0~git20160203.0.5c87c51-1 golang-github-mitchellh-copystructure-dev_0.0~git20160128.0.80adcec-1 golang-github-mitchellh-mapstructure-dev_0.0~git20160212.0.d2dd026-1 golang-github-mitchellh-reflectwalk-dev_0.0~git20150527.0.eecf4c7-1 golang-github-opencontainers-runc-dev_0.1.0+dfsg1-1 golang-github-opencontainers-specs-dev_0.5.0-1 golang-github-pmezard-go-difflib-dev_0.0~git20160110.0.792786c-2 
golang-github-prometheus-client-model-dev_0.0.2+git20150212.12.fa8ad6f-1 golang-github-ryanuber-columnize-dev_2.1.0-1 golang-github-seccomp-libseccomp-golang-dev_0.0~git20150813.0.1b506fc-1 golang-github-sirupsen-logrus-dev_0.10.0-2 golang-github-stretchr-objx-dev_0.0~git20150928.0.1a9d0bb-1 golang-github-stretchr-testify-dev_1.1.3+git20160418.12.c5d7a69+ds-1 golang-github-ugorji-go-codec-dev_0.0~git20151130.0.357a44b-1 golang-github-ugorji-go-msgpack-dev_0.0~git20130605.792643-1 golang-github-vishvananda-netlink-dev_0.0~git20160306.0.4fdf23c-1 golang-github-vishvananda-netns-dev_0.0~git20150710.0.604eaf1-1 golang-github-xeipuuv-gojsonpointer-dev_0.0~git20151027.0.e0fe6f6-1 golang-github-xeipuuv-gojsonreference-dev_0.0~git20150808.0.e02fc20-1 golang-github-xeipuuv-gojsonschema-dev_0.0~git20160323.0.93e72a7-1 golang-go_2:1.6.1+1+b3 golang-gocapability-dev_0.0~git20150716.0.2c00dae-1 golang-golang-x-net-dev_1:0.0+git20160518.b3e9c8f+dfsg-2 golang-golang-x-sys-dev_0.0~git20160611.0.7f918dd-1 golang-golang-x-text-dev_0.0~git20160606.0.a4d77b4-1 golang-golang-x-tools_1:0.0~git20160315.0.f42ec61-2 golang-golang-x-tools-dev_1:0.0~git20160315.0.f42ec61-2 golang-gopkg-check.v1-dev_0.0+git20160105.0.4f90aea-2 golang-gopkg-mgo.v2-dev_2015.12.06-1 golang-gopkg-tomb.v2-dev_0.0~git20140626.14b3d72-1 golang-gopkg-vmihailenco-msgpack.v2-dev_2.5.1-1 golang-goprotobuf-dev_0.0~git20160425.7cc19b7-1 golang-logrus-dev_0.10.0-2 golang-procfs-dev_0+git20160411.abf152e-1 golang-prometheus-client-dev_0.7.0+ds-4 golang-protobuf-extensions-dev_1.0.0-1 golang-src_2:1.6.1+1+b3 golang-x-text-dev_0.0~git20160606.0.a4d77b4-1 gpgv_1.4.20-6 grep_2.25-3 groff-base_1.22.3-7 gzip_1.6-5 hostname_3.17 ifupdown_0.8.13 init_1.34 init-system-helpers_1.34 initscripts_2.88dsf-59.4 insserv_1.14.0-5.3 intltool-debian_0.35.0+20060710.4 iproute2_4.3.0-1 kbd_2.0.3-2 keyboard-configuration_1.146 klibc-utils_2.0.4-9+rpi1 kmod_22-1.1 libacl1_2.2.52-3 libapparmor1_2.10-4 libapt-pkg5.0_1.2.12 
libarchive-zip-perl_1.57-1 libasan2_5.3.1-21 libatm1_1:2.5.1-1.5 libatomic1_6.1.1-1+rpi1 libattr1_1:2.4.47-2 libaudit-common_1:2.5.2-1+rpi1 libaudit1_1:2.5.2-1+rpi1 libblkid1_2.28-5 libbsd0_0.8.3-1 libbz2-1.0_1.0.6-8 libc-bin_2.22-9 libc-dev-bin_2.22-9 libc6_2.22-9 libc6-dev_2.22-9 libcap2_1:2.25-1 libcap2-bin_1:2.25-1 libcc1-0_6.1.1-1+rpi1 libcomerr2_1.43-3 libcroco3_0.6.11-1 libcryptsetup4_2:1.7.0-2 libdb5.3_5.3.28-11 libdbus-1-3_1.10.8-1 libdebconfclient0_0.213 libdevmapper1.02.1_2:1.02.124-1 libdpkg-perl_1.18.7 libdrm2_2.4.68-1 libfakeroot_1.20.2-2 libfdisk1_2.28-5 libffi6_3.2.1-4 libfile-stripnondeterminism-perl_0.019-1 libfuse2_2.9.6-1 libgc1c2_1:7.4.2-8 libgcc-5-dev_5.3.1-21 libgcc1_1:6.1.1-1+rpi1 libgcrypt20_1.7.0-2 libgdbm3_1.8.3-13.1 libglib2.0-0_2.48.1-1 libgmp10_2:6.1.0+dfsg-2 libgomp1_6.1.1-1+rpi1 libgpg-error0_1.22-2 libicu55_55.1-7 libisl15_0.17.1-1 libjs-jquery_1.12.3-1 libjs-jquery-ui_1.10.1+dfsg-1 libklibc_2.0.4-9+rpi1 libkmod2_22-1.1 liblocale-gettext-perl_1.07-2 liblz4-1_0.0~r131-2 liblzma5_5.1.1alpha+20120614-2.1 libmagic1_1:5.25-2 libmount1_2.28-5 libmpc3_1.0.3-1 libmpfr4_3.1.4-2 libncurses5_6.0+20160319-1 libncursesw5_6.0+20160319-1 libpam-modules_1.1.8-3.3 libpam-modules-bin_1.1.8-3.3 libpam-runtime_1.1.8-3.3 libpam0g_1.1.8-3.3 libpcre3_2:8.38-3.1 libperl5.22_5.22.2-1 libpipeline1_1.4.1-2 libplymouth4_0.9.2-3 libpng12-0_1.2.54-6 libprocps5_2:3.3.11-3 libprotobuf9v5_2.6.1-2 libprotoc9v5_2.6.1-2 libreadline6_6.3-8+b3 libsasl2-2_2.1.26.dfsg1-15 libsasl2-dev_2.1.26.dfsg1-15 libsasl2-modules-db_2.1.26.dfsg1-15 libseccomp-dev_2.3.1-2 libseccomp2_2.3.1-2 libselinux1_2.5-3 libsemanage-common_2.5-1 libsemanage1_2.5-1 libsepol1_2.5-1 libsigsegv2_2.10-5 libsmartcols1_2.28-5 libss2_1.43-3 libssl1.0.2_1.0.2h-1 libstdc++-5-dev_5.3.1-21 libstdc++6_6.1.1-1+rpi1 libsystemd-dev_230-2 libsystemd0_230-2 libtext-charwidth-perl_0.04-7+b6 libtext-iconv-perl_1.7-5+b7 libtext-wrapi18n-perl_0.06-7.1 libtimedate-perl_2.3000-2 libtinfo5_6.0+20160319-1 libtool_2.4.6-0.1 
libubsan0_6.1.1-1+rpi1 libudev1_230-2 libunistring0_0.9.6+really0.9.3-0.1 libusb-0.1-4_2:0.1.12-30 libustr-1.0-1_1.0.4-5 libuuid1_2.28-5 libxml2_2.9.3+dfsg1-1.2 linux-libc-dev_3.18.5-1~exp1+rpi19+stretch login_1:4.2-3.1 lsb-base_9.20160110+rpi1 m4_1.4.17-5 make_4.1-9 makedev_2.3.1-93 man-db_2.7.5-1 manpages_4.06-1 mawk_1.3.3-17 mount_2.28-5 multiarch-support_2.22-9 ncurses-base_6.0+20160319-1 ncurses-bin_6.0+20160319-1 netbase_5.3 openssl_1.0.2h-1 passwd_1:4.2-3.1 patch_2.7.5-1 perl_5.22.2-1 perl-base_5.22.2-1 perl-modules-5.22_5.22.2-1 pkg-config_0.29-4 po-debconf_1.0.19 procps_2:3.3.11-3 protobuf-compiler_2.6.1-2 psmisc_22.21-2.1 raspbian-archive-keyring_20120528.2 readline-common_6.3-8 sbuild-build-depends-consul-dummy_0.invalid.0 sbuild-build-depends-core-dummy_0.invalid.0 sed_4.2.2-7.1 sensible-utils_0.0.9 startpar_0.59-3 systemd_230-2 systemd-sysv_230-2 sysv-rc_2.88dsf-59.4 sysvinit-utils_2.88dsf-59.4 tar_1.29-1+rpi1 tzdata_2016d-2 udev_230-2 util-linux_2.28-5 xkb-data_2.17-1 xz-utils_5.1.1alpha+20120614-2.1 zlib1g_1:1.2.8.dfsg-2+b1

+------------------------------------------------------------------------------+
| Build                                                                        |
+------------------------------------------------------------------------------+


Unpack source
-------------

gpgv: keyblock resource `/sbuild-nonexistent/.gnupg/trustedkeys.gpg': file open error
gpgv: Signature made Sat Jun 18 10:53:43 2016 UTC using RSA key ID 53968D1B
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./consul_0.6.4~dfsg-3.dsc
dpkg-source: info: extracting consul in consul-0.6.4~dfsg
dpkg-source: info: unpacking consul_0.6.4~dfsg.orig.tar.xz
dpkg-source: info: unpacking consul_0.6.4~dfsg-3.debian.tar.xz
dpkg-source: info: applying 0001-update-test-fixture-paths.patch
dpkg-source: info: applying nomad-0.3.2.patch

Check disc space
----------------

Sufficient free space for build

User Environment
----------------

DEB_BUILD_OPTIONS=parallel=4
HOME=/sbuild-nonexistent
LOGNAME=buildd
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
SCHROOT_ALIAS_NAME=stretch-staging-armhf-sbuild
SCHROOT_CHROOT_NAME=stretch-staging-armhf-sbuild
SCHROOT_COMMAND=env
SCHROOT_GID=109
SCHROOT_GROUP=buildd
SCHROOT_SESSION_ID=stretch-staging-armhf-sbuild-c2677bfe-326a-403f-9d38-57e9361bb9bb
SCHROOT_UID=104
SCHROOT_USER=buildd
SHELL=/bin/sh
TERM=linux
USER=buildd

dpkg-buildpackage
-----------------

dpkg-buildpackage: info: source package consul
dpkg-buildpackage: info: source version 0.6.4~dfsg-3
dpkg-buildpackage: info: source distribution unstable
 dpkg-source --before-build consul-0.6.4~dfsg
dpkg-buildpackage: info: host architecture armhf
 fakeroot debian/rules clean
dh clean --buildsystem=golang --with=golang
   dh_testdir -O--buildsystem=golang
   dh_auto_clean -O--buildsystem=golang
   dh_clean -O--buildsystem=golang
 debian/rules build-arch
dh build-arch --buildsystem=golang --with=golang
   dh_testdir -a -O--buildsystem=golang
   dh_update_autotools_config -a -O--buildsystem=golang
   dh_auto_configure -a -O--buildsystem=golang
   dh_auto_build -a -O--buildsystem=golang
	go install -v github.com/hashicorp/consul github.com/hashicorp/consul/acl github.com/hashicorp/consul/api github.com/hashicorp/consul/command github.com/hashicorp/consul/command/agent github.com/hashicorp/consul/consul github.com/hashicorp/consul/consul/prepared_query github.com/hashicorp/consul/consul/state github.com/hashicorp/consul/consul/structs github.com/hashicorp/consul/lib github.com/hashicorp/consul/testutil github.com/hashicorp/consul/tlsutil github.com/hashicorp/consul/watch
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/promise
github.com/armon/go-radix
github.com/hashicorp/serf/coordinate
github.com/hashicorp/go-cleanhttp
github.com/armon/circbuf
github.com/armon/go-metrics
github.com/DataDog/datadog-go/statsd
github.com/elazarl/go-bindata-assetfs
github.com/hashicorp/consul/api
github.com/armon/go-metrics/datadog
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts
github.com/Sirupsen/logrus
github.com/docker/go-units
golang.org/x/net/context
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools
github.com/opencontainers/runc/libcontainer/user
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/pools
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/stdcopy
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive
github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp
github.com/hashicorp/golang-lru/simplelru
github.com/hashicorp/golang-lru
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/homedir
github.com/hashicorp/hcl/hcl/strconv
github.com/hashicorp/go-msgpack/codec
github.com/hashicorp/hcl/hcl/token
github.com/hashicorp/hil/ast
github.com/hashicorp/hcl/hcl/ast
github.com/hashicorp/hcl/hcl/scanner
github.com/hashicorp/hcl/json/token
github.com/hashicorp/hcl/hcl/parser
github.com/hashicorp/hcl/json/scanner
github.com/mitchellh/reflectwalk
github.com/hashicorp/hcl/json/parser
github.com/hashicorp/hil
github.com/fsouza/go-dockerclient
github.com/hashicorp/hcl
github.com/hashicorp/consul/acl
github.com/mitchellh/copystructure
github.com/hashicorp/go-immutable-radix
github.com/hashicorp/consul/lib
github.com/hashicorp/consul/consul/structs
github.com/hashicorp/consul/tlsutil
github.com/hashicorp/go-memdb
github.com/hashicorp/go-uuid
github.com/hashicorp/errwrap
github.com/hashicorp/go-multierror
github.com/hashicorp/consul/consul/prepared_query
github.com/miekg/dns
github.com/hashicorp/net-rpc-msgpackrpc
github.com/hashicorp/consul/consul/state
github.com/hashicorp/raft
github.com/boltdb/bolt
github.com/hashicorp/yamux
github.com/inconshreveable/muxado/proto/buffer
github.com/inconshreveable/muxado/proto/frame
github.com/hashicorp/consul/watch
github.com/hashicorp/raft-boltdb
github.com/hashicorp/go-checkpoint
golang.org/x/sys/unix
github.com/inconshreveable/muxado/proto
github.com/hashicorp/go-syslog
github.com/inconshreveable/muxado/proto/ext
github.com/hashicorp/logutils
github.com/inconshreveable/muxado
github.com/hashicorp/scada-client
github.com/hashicorp/go-reap
github.com/bgentry/speakeasy
github.com/mattn/go-isatty
github.com/mitchellh/mapstructure
github.com/mitchellh/cli
github.com/ryanuber/columnize
github.com/hashicorp/consul/testutil
github.com/hashicorp/memberlist
github.com/hashicorp/serf/serf
github.com/hashicorp/consul/consul
github.com/hashicorp/consul/command/agent
github.com/hashicorp/consul/command
github.com/hashicorp/consul
   debian/rules override_dh_auto_test
make[1]: Entering directory '/<<PKGBUILDDIR>>'
## TODO patch out tests that rely on network via -test.short or
## something (which doesn't appear to be used anywhere ATM, so might
## be amenable as a PR to upstream)
dh_auto_test
	go test -v github.com/hashicorp/consul github.com/hashicorp/consul/acl github.com/hashicorp/consul/api github.com/hashicorp/consul/command github.com/hashicorp/consul/command/agent github.com/hashicorp/consul/consul github.com/hashicorp/consul/consul/prepared_query github.com/hashicorp/consul/consul/state github.com/hashicorp/consul/consul/structs github.com/hashicorp/consul/lib github.com/hashicorp/consul/testutil github.com/hashicorp/consul/tlsutil github.com/hashicorp/consul/watch
testing: warning: no tests to run
PASS
ok  	github.com/hashicorp/consul	0.115s
=== RUN   TestRootACL
--- PASS: TestRootACL (0.00s)
=== RUN   TestStaticACL
--- PASS: TestStaticACL (0.00s)
=== RUN   TestPolicyACL
--- PASS: TestPolicyACL (0.00s)
=== RUN   TestPolicyACL_Parent
--- PASS: TestPolicyACL_Parent (0.00s)
=== RUN   TestPolicyACL_Keyring
--- PASS: TestPolicyACL_Keyring (0.00s)
=== RUN   TestCache_GetPolicy
--- PASS: TestCache_GetPolicy (0.00s)
=== RUN   TestCache_GetACL
--- PASS: TestCache_GetACL (0.00s)
=== RUN   TestCache_ClearACL
--- PASS: TestCache_ClearACL (0.00s)
=== RUN   TestCache_Purge
--- PASS: TestCache_Purge (0.00s)
=== RUN   TestCache_GetACLPolicy
--- PASS: TestCache_GetACLPolicy (0.00s)
=== RUN   TestCache_GetACL_Parent
--- PASS: TestCache_GetACL_Parent (0.00s)
=== RUN   TestCache_GetACL_ParentCache
--- PASS: TestCache_GetACL_ParentCache (0.00s)
=== RUN   TestACLPolicy_Parse_HCL
--- PASS: TestACLPolicy_Parse_HCL (0.00s)
=== RUN   TestACLPolicy_Parse_JSON
--- PASS: TestACLPolicy_Parse_JSON (0.00s)
=== RUN   TestACLPolicy_Keyring_Empty
--- PASS: TestACLPolicy_Keyring_Empty (0.00s)
=== RUN   TestACLPolicy_Bad_Policy
--- PASS: TestACLPolicy_Bad_Policy (0.00s)
PASS
ok  	github.com/hashicorp/consul/acl	0.104s
=== RUN   TestACL_CreateDestroy
=== RUN   TestACL_CloneDestroy
=== RUN   TestACL_Info
=== RUN   TestACL_List
=== RUN   TestAgent_Self
=== RUN   TestAgent_Members
=== RUN   TestAgent_Services
=== RUN   TestAgent_Services_CheckPassing
=== RUN   TestAgent_Services_CheckBadStatus
=== RUN   TestAgent_ServiceAddress
=== RUN   TestAgent_EnableTagOverride
=== RUN   TestAgent_Services_MultipleChecks
=== RUN   TestAgent_SetTTLStatus
=== RUN   TestAgent_Checks
=== RUN   TestAgent_CheckStartPassing
=== RUN   TestAgent_Checks_serviceBound
=== RUN   TestAgent_Checks_Docker
=== RUN   TestAgent_Join
=== RUN   TestAgent_ForceLeave
=== RUN   TestServiceMaintenance
=== RUN   TestNodeMaintenance
=== RUN   TestDefaultConfig_env
=== RUN   TestSetQueryOptions
=== RUN   TestSetWriteOptions
=== RUN   TestRequestToHTTP
=== RUN   TestParseQueryMeta
=== RUN   TestAPI_UnixSocket
=== RUN   TestAPI_durToMsec
--- PASS: TestAPI_durToMsec (0.00s)
=== RUN   TestAPI_IsServerError
--- PASS: TestAPI_IsServerError (0.00s)
=== RUN   TestCatalog_Datacenters
=== RUN   TestCatalog_Nodes
=== RUN   TestCatalog_Services
=== RUN   TestCatalog_Service
=== RUN   TestCatalog_Node
=== RUN   TestCatalog_Registration
=== RUN   TestCatalog_EnableTagOverride
=== RUN   TestCoordinate_Datacenters
=== RUN   TestCoordinate_Nodes
=== RUN   TestEvent_FireList
=== RUN   TestHealth_Node
=== RUN   TestHealth_Checks
=== RUN   TestHealth_Service
=== RUN   TestHealth_State
=== RUN   TestClientPutGetDelete
=== RUN   TestClient_List_DeleteRecurse
=== RUN   TestClient_DeleteCAS
=== RUN   TestClient_CAS
=== RUN   TestClient_WatchGet
=== RUN   TestClient_WatchList
=== RUN   TestClient_Keys_DeleteRecurse
=== RUN   TestClient_AcquireRelease
=== RUN   TestLock_LockUnlock
=== RUN   TestLock_ForceInvalidate
=== RUN   TestLock_DeleteKey
=== RUN   TestLock_Contend
=== RUN   TestLock_Destroy
=== RUN   TestLock_Conflict
=== RUN   TestLock_ReclaimLock
=== RUN   TestLock_MonitorRetry
=== RUN   TestLock_OneShot
=== RUN   TestPreparedQuery
=== RUN   TestSemaphore_AcquireRelease
=== RUN   TestSemaphore_ForceInvalidate
=== RUN   TestSemaphore_DeleteKey
=== RUN   TestSemaphore_Contend
=== RUN   TestSemaphore_BadLimit
=== RUN   TestSemaphore_Destroy
=== RUN   TestSemaphore_Conflict
=== RUN   TestSemaphore_MonitorRetry
=== RUN   TestSemaphore_OneShot
=== RUN   TestSession_CreateDestroy
=== RUN   TestSession_CreateRenewDestroy
=== RUN   TestSession_CreateRenewDestroyRenew
=== RUN   TestSession_CreateDestroyRenewPeriodic
=== RUN   TestSession_Info
=== RUN   TestSession_Node
=== RUN   TestSession_List
=== RUN   TestStatusLeader
=== RUN   TestStatusPeers
--- SKIP: TestACL_List (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Self (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Members (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Services (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Services_CheckPassing (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Services_CheckBadStatus (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_ServiceAddress (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_EnableTagOverride (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Services_MultipleChecks (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_SetTTLStatus (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Checks (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_CheckStartPassing (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Checks_serviceBound (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestACL_Info (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestACL_CloneDestroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestACL_CreateDestroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestServiceMaintenance (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestNodeMaintenance (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Join (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_ForceLeave (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestAgent_Checks_Docker (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- PASS: TestDefaultConfig_env (0.00s)
--- PASS: TestParseQueryMeta (0.00s)
--- SKIP: TestAPI_UnixSocket (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSetWriteOptions (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Nodes (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Services (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Service (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Node (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestRequestToHTTP (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Registration (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_Datacenters (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCoordinate_Datacenters (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestEvent_FireList (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestHealth_Node (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestHealth_Checks (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCoordinate_Nodes (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestHealth_Service (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestCatalog_EnableTagOverride (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_List_DeleteRecurse (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_DeleteCAS (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_CAS (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSetQueryOptions (0.01s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_WatchList (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_Keys_DeleteRecurse (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestHealth_State (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_WatchGet (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClient_AcquireRelease (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_LockUnlock (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_ForceInvalidate (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_DeleteKey (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_Destroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_Conflict (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_ReclaimLock (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_MonitorRetry (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_OneShot (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestPreparedQuery (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestLock_Contend (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_AcquireRelease (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_ForceInvalidate (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_DeleteKey (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_Contend (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_BadLimit (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestClientPutGetDelete (0.02s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_Destroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_OneShot (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_CreateDestroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_CreateRenewDestroy (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_CreateRenewDestroyRenew (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_CreateDestroyRenewPeriodic (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_Info (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_Node (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSession_List (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestStatusLeader (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestStatusPeers (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_MonitorRetry (0.00s)
	server.go:143: consul not found on $PATH, skipping
--- SKIP: TestSemaphore_Conflict (0.00s)
	server.go:143: consul not found on $PATH, skipping
PASS
ok  	github.com/hashicorp/consul/api	0.220s
=== RUN   TestConfigTestCommand_implements
--- PASS: TestConfigTestCommand_implements (0.00s)
=== RUN   TestConfigTestCommandFailOnEmptyFile
--- PASS: TestConfigTestCommandFailOnEmptyFile (0.00s)
=== RUN   TestConfigTestCommandSucceedOnEmptyDir
--- PASS: TestConfigTestCommandSucceedOnEmptyDir (0.00s)
=== RUN   TestConfigTestCommandSucceedOnMinimalConfigFile
--- PASS: TestConfigTestCommandSucceedOnMinimalConfigFile (0.00s)
=== RUN   TestConfigTestCommandSucceedOnMinimalConfigDir
--- PASS: TestConfigTestCommandSucceedOnMinimalConfigDir (0.00s)
=== RUN   TestConfigTestCommandSucceedOnConfigDirWithEmptyFile
--- PASS: TestConfigTestCommandSucceedOnConfigDirWithEmptyFile (0.00s)
=== RUN   TestEventCommand_implements
--- PASS: TestEventCommand_implements (0.00s)
=== RUN   TestEventCommandRun
2016/06/23 07:43:20 [DEBUG] http: Request GET /v1/agent/self (3.70978ms) from=127.0.0.1:52044
2016/06/23 07:43:20 [DEBUG] http: Request PUT /v1/event/fire/cmd (1.067366ms) from=127.0.0.1:52044
2016/06/23 07:43:21 [DEBUG] http: Shutting down http server (127.0.0.1:10411)
--- PASS: TestEventCommandRun (0.91s)
=== RUN   TestExecCommand_implements
--- PASS: TestExecCommand_implements (0.00s)
=== RUN   TestExecCommandRun
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (358.344µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (311.01µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (373.678µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (285.009µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (278.009µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (318.343µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (285.008µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (293.009µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (320.01µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (303.676µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (304.675µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (311.01µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (419.013µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (292.676µs) from=127.0.0.1:57942
2016/06/23 07:43:21 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:21 [DEBUG] http: Request GET /v1/catalog/nodes (306.009µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (292.009µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (297.342µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (318.677µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (294.342µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (294.009µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (297.009µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (308.343µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (431.68µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (299.009µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (305.009µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (345.01µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (2.467409ms) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (312.343µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (300.343µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (311.343µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (384.345µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (337.677µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (298.009µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (305.343µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (964.363µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (334.01µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (834.359µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (678.021µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (2.426408ms) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (304.676µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (299.676µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (325.344µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (342.01µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (278.008µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (274.342µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (287.675µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (252.008µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (295.01µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (318.009µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (290.009µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (282.342µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (294.343µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (273.342µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (280.675µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (403.345µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (357.677µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (309.01µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (355.677µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (287.008µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (290.343µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (284.009µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (331.01µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (296.675µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (295.342µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (309.009µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/catalog/nodes (395.345µs) from=127.0.0.1:57942
2016/06/23 07:43:22 [DEBUG] http: Request GET /v1/agent/self (995.03µs) from=127.0.0.1:57944
2016/06/23 07:43:22 [DEBUG] http: Request PUT /v1/session/create (124.233802ms) from=127.0.0.1:57944
2016/06/23 07:43:22 [DEBUG] http: Request PUT /v1/kv/_rexec/f501c308-5df5-1b18-4e90-a1233e12cdbe/job?acquire=f501c308-5df5-1b18-4e90-a1233e12cdbe (131.402355ms) from=127.0.0.1:57944
2016/06/23 07:43:23 [DEBUG] http: Request PUT /v1/event/fire/_rexec (1.87139ms) from=127.0.0.1:57944
2016/06/23 07:43:23 [DEBUG] http: Request GET /v1/kv/_rexec/f501c308-5df5-1b18-4e90-a1233e12cdbe/?keys=&wait=400ms (1.180703ms) from=127.0.0.1:57944
2016/06/23 07:43:23 [DEBUG] http: Request GET /v1/kv/_rexec/f501c308-5df5-1b18-4e90-a1233e12cdbe/?index=5&keys=&wait=400ms (169.8962ms) from=127.0.0.1:57944
2016/06/23 07:43:23 [DEBUG] http: Request GET /v1/kv/_rexec/f501c308-5df5-1b18-4e90-a1233e12cdbe/?index=6&keys=&wait=400ms (261.079323ms) from=127.0.0.1:57944
2016/06/23 07:43:23 [DEBUG] http: Request GET /v1/kv/_rexec/f501c308-5df5-1b18-4e90-a1233e12cdbe/Node%202/out/00000 (604.352µs) from=127.0.0.1:57944
2016/06/23 07:43:23 [DEBUG] http: Request GET /v1/kv/_rexec/f501c308-5df5-1b18-4e90-a1233e12cdbe/?index=7&keys=&wait=400ms (124.674816ms) from=127.0.0.1:57944
2016/06/23 07:43:23 [DEBUG] http: Request GET /v1/kv/_rexec/f501c308-5df5-1b18-4e90-a1233e12cdbe/Node%202/exit (331.677µs) from=127.0.0.1:57944
2016/06/23 07:43:24 [DEBUG] http: Request GET /v1/kv/_rexec/f501c308-5df5-1b18-4e90-a1233e12cdbe/?index=8&keys=&wait=400ms (415.354045ms) from=127.0.0.1:57944
2016/06/23 07:43:24 [DEBUG] http: Request PUT /v1/session/destroy/f501c308-5df5-1b18-4e90-a1233e12cdbe (131.11468ms) from=127.0.0.1:57946
2016/06/23 07:43:24 [DEBUG] http: Request DELETE /v1/kv/_rexec/f501c308-5df5-1b18-4e90-a1233e12cdbe?recurse= (136.380507ms) from=127.0.0.1:57944
2016/06/23 07:43:24 [DEBUG] http: Request PUT /v1/session/destroy/f501c308-5df5-1b18-4e90-a1233e12cdbe (129.893309ms) from=127.0.0.1:57944
2016/06/23 07:43:24 [DEBUG] http: Shutting down http server (127.0.0.1:10421)
--- PASS: TestExecCommandRun (3.19s)
=== RUN   TestExecCommandRun_CrossDC
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (366.011µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (268.342µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (286.342µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (271.342µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (257.008µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (267.342µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (277.008µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (275.341µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (270.008µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (268.341µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (265.341µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (282.342µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (259.674µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (258.008µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (273.342µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (267.675µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (273.675µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (273.342µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (286.008µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (268.342µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (244.674µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (407.346µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (284.342µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (290.009µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (287.009µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (290.009µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (369.344µs) from=127.0.0.1:42824
2016/06/23 07:43:25 [DEBUG] http: Request GET /v1/catalog/nodes (284.675µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (286.009µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (249.341µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (271.341µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (275.342µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (272.009µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (289.343µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (280.675µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (280.675µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (332.343µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (261.341µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (283.676µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (289.008µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (263.008µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (287.675µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (291.676µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (281.675µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (272.675µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (312.677µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (280.675µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (331.01µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (262.674µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (274.342µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (272.675µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (287.342µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (316.009µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (288.675µs) from=127.0.0.1:42824
2016/06/23 07:43:26 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (345.011µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (296.676µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (292.342µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (340.678µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (295.343µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (269.008µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (342.344µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (322.01µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (298.676µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (260.008µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (325.344µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (284.342µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (307.676µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (299.676µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (288.675µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (302.676µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (278.675µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (273.675µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (284.676µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (315.676µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (312.009µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (287.009µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (379.012µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (305.009µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (297.01µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (318.677µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (293.675µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (310.343µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (363.011µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (364.011µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (303.676µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (262.674µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (331.343µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (281.009µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (294.343µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/catalog/nodes (301.01µs) from=127.0.0.1:35586
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/agent/self?dc=dc2 (782.358µs) from=127.0.0.1:42828
2016/06/23 07:43:26 [DEBUG] http: Request GET /v1/health/service/consul?dc=dc2&passing=1 (5.154825ms) from=127.0.0.1:42828
2016/06/23 07:43:26 [DEBUG] http: Request PUT /v1/session/create?dc=dc2 (131.756699ms) from=127.0.0.1:42828
2016/06/23 07:43:27 [DEBUG] http: Request PUT /v1/kv/_rexec/6b2ee362-6be7-6108-dc8e-1dfdfb445fa6/job?acquire=6b2ee362-6be7-6108-dc8e-1dfdfb445fa6&dc=dc2 (131.633028ms) from=127.0.0.1:42828
2016/06/23 07:43:27 [DEBUG] http: Request PUT /v1/event/fire/_rexec?dc=dc2 (5.310496ms) from=127.0.0.1:42828
2016/06/23 07:43:27 [DEBUG] http: Request GET /v1/kv/_rexec/6b2ee362-6be7-6108-dc8e-1dfdfb445fa6/?dc=dc2&keys=&wait=400ms (3.514107ms) from=127.0.0.1:42828
2016/06/23 07:43:27 [DEBUG] http: Request GET /v1/kv/_rexec/6b2ee362-6be7-6108-dc8e-1dfdfb445fa6/?dc=dc2&index=5&keys=&wait=400ms (177.036085ms) from=127.0.0.1:42828
2016/06/23 07:43:27 [DEBUG] http: Request GET /v1/kv/_rexec/6b2ee362-6be7-6108-dc8e-1dfdfb445fa6/?dc=dc2&index=6&keys=&wait=400ms (196.58235ms) from=127.0.0.1:42828
2016/06/23 07:43:27 [DEBUG] http: Request GET /v1/kv/_rexec/6b2ee362-6be7-6108-dc8e-1dfdfb445fa6/Node%204/out/00000?dc=dc2 (5.967183ms) from=127.0.0.1:42828
2016/06/23 07:43:27 [DEBUG] http: Request GET /v1/kv/_rexec/6b2ee362-6be7-6108-dc8e-1dfdfb445fa6/?dc=dc2&index=7&keys=&wait=400ms (109.474684ms) from=127.0.0.1:42828
2016/06/23 07:43:27 [DEBUG] http: Request GET /v1/kv/_rexec/6b2ee362-6be7-6108-dc8e-1dfdfb445fa6/Node%204/exit?dc=dc2 (3.2761ms) from=127.0.0.1:42828
2016/06/23 07:43:28 [DEBUG] http: Request GET /v1/kv/_rexec/6b2ee362-6be7-6108-dc8e-1dfdfb445fa6/?dc=dc2&index=8&keys=&wait=400ms (432.536237ms) from=127.0.0.1:42828
2016/06/23 07:43:28 [DEBUG] http: Request PUT /v1/session/destroy/6b2ee362-6be7-6108-dc8e-1dfdfb445fa6?dc=dc2 (136.065164ms) from=127.0.0.1:42832
2016/06/23 07:43:28 [DEBUG] http: Request DELETE /v1/kv/_rexec/6b2ee362-6be7-6108-dc8e-1dfdfb445fa6?dc=dc2&recurse= (140.381296ms) from=127.0.0.1:42828
2016/06/23 07:43:28 [DEBUG] http: Request PUT /v1/session/destroy/6b2ee362-6be7-6108-dc8e-1dfdfb445fa6?dc=dc2 (132.255381ms) from=127.0.0.1:42828
2016/06/23 07:43:28 [DEBUG] http: Shutting down http server (127.0.0.1:10441)
2016/06/23 07:43:28 [DEBUG] http: Shutting down http server (127.0.0.1:10431)
--- PASS: TestExecCommandRun_CrossDC (4.31s)
=== RUN   TestExecCommand_Validate
--- PASS: TestExecCommand_Validate (0.00s)
=== RUN   TestExecCommand_Sessions
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (364.011µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (329.677µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (398.679µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (312.343µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (363.344µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (338.01µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (298.343µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (323.677µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (323.677µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (296.342µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (330.344µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (365.344µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (337.677µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (318.676µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (360.345µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (296.676µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (355.344µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (419.679µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (373.678µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (373.011µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (309.343µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (299.01µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (414.679µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (317.01µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (351.677µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (363.344µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (322.01µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (326.01µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (321.343µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (314.676µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (361.678µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (314.01µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (345.677µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (290.675µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (301.676µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (354.677µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (277.008µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (271.009µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (307.343µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (334.677µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (314.676µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (298.342µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (293.676µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (308.009µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (378.011µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (338.344µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (317.01µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (323.677µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (306.009µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (297.009µs) from=127.0.0.1:56864
2016/06/23 07:43:29 [DEBUG] http: Request GET /v1/catalog/nodes (286.009µs) from=127.0.0.1:56864
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/catalog/nodes (319.677µs) from=127.0.0.1:56864
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/catalog/nodes (293.342µs) from=127.0.0.1:56864
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/catalog/nodes (330.677µs) from=127.0.0.1:56864
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/catalog/nodes (288.675µs) from=127.0.0.1:56864
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/catalog/nodes (306.343µs) from=127.0.0.1:56864
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/catalog/nodes (356.678µs) from=127.0.0.1:56864
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/catalog/nodes (2.40374ms) from=127.0.0.1:56864
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/catalog/nodes (258.675µs) from=127.0.0.1:56864
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/catalog/nodes (299.676µs) from=127.0.0.1:56864
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/catalog/nodes (304.676µs) from=127.0.0.1:56864
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/catalog/nodes (360.344µs) from=127.0.0.1:56864
2016/06/23 07:43:30 [DEBUG] http: Request PUT /v1/session/create (116.514899ms) from=127.0.0.1:56866
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/session/info/7258badc-933b-711a-5951-4e14e6b7f1e3 (729.023µs) from=127.0.0.1:56866
2016/06/23 07:43:30 [DEBUG] http: Request PUT /v1/session/destroy/7258badc-933b-711a-5951-4e14e6b7f1e3 (116.401562ms) from=127.0.0.1:56866
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/session/info/7258badc-933b-711a-5951-4e14e6b7f1e3 (308.009µs) from=127.0.0.1:56868
2016/06/23 07:43:30 [DEBUG] http: Shutting down http server (127.0.0.1:10451)
--- PASS: TestExecCommand_Sessions (1.59s)
=== RUN   TestExecCommand_Sessions_Foreign
2016/06/23 07:43:30 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:35206
2016/06/23 07:43:30 [DEBUG] http: Request GET /v1/catalog/nodes (329.343µs) from=127.0.0.1:35206
[preceding ERR/DEBUG pair repeated 31 more times between 07:43:30 and 07:43:31 while the test agent retried, awaiting a cluster leader]
2016/06/23 07:43:31 [DEBUG] http: Request GET /v1/catalog/nodes (327.01µs) from=127.0.0.1:35206
[DEBUG polling line repeated 26 more times with sub-millisecond response times; the ERR lines stop here, so the requests now succeed]
2016/06/23 07:43:31 [DEBUG] http: Request GET /v1/health/service/consul?passing=1 (956.029µs) from=127.0.0.1:35208
2016/06/23 07:43:31 [DEBUG] http: Request PUT /v1/session/create (145.405783ms) from=127.0.0.1:35208
2016/06/23 07:43:31 [DEBUG] http: Request GET /v1/session/info/7bbbaed7-b888-8a0a-0f7b-e3a55938da76 (758.023µs) from=127.0.0.1:35208
2016/06/23 07:43:31 [DEBUG] http: Request PUT /v1/session/destroy/7bbbaed7-b888-8a0a-0f7b-e3a55938da76 (120.643359ms) from=127.0.0.1:35208
2016/06/23 07:43:31 [DEBUG] http: Request GET /v1/session/info/7bbbaed7-b888-8a0a-0f7b-e3a55938da76 (340.677µs) from=127.0.0.1:35210
2016/06/23 07:43:32 [DEBUG] http: Shutting down http server (127.0.0.1:10461)
--- PASS: TestExecCommand_Sessions_Foreign (1.53s)
=== RUN   TestExecCommand_UploadDestroy
2016/06/23 07:43:32 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:37948
2016/06/23 07:43:32 [DEBUG] http: Request GET /v1/catalog/nodes (374.678µs) from=127.0.0.1:37948
[preceding ERR/DEBUG pair repeated 37 more times at 07:43:32 while the test agent retried, awaiting a cluster leader]
2016/06/23 07:43:32 [DEBUG] http: Request GET /v1/catalog/nodes (313.01µs) from=127.0.0.1:37948
[DEBUG polling line repeated 49 more times between 07:43:32 and 07:43:33 with sub-millisecond response times; the ERR lines stop here, so the requests now succeed]
2016/06/23 07:43:33 [DEBUG] http: Request PUT /v1/session/create (348.715339ms) from=127.0.0.1:37950
2016/06/23 07:43:34 [DEBUG] http: Request PUT /v1/kv/_rexec/9b239d80-2daf-328e-cd3d-ebb9213984e9/job?acquire=9b239d80-2daf-328e-cd3d-ebb9213984e9 (230.484721ms) from=127.0.0.1:37950
2016/06/23 07:43:34 [DEBUG] http: Request GET /v1/kv/_rexec/9b239d80-2daf-328e-cd3d-ebb9213984e9/job (387.346µs) from=127.0.0.1:37950
2016/06/23 07:43:34 [DEBUG] http: Request DELETE /v1/kv/_rexec/9b239d80-2daf-328e-cd3d-ebb9213984e9?recurse= (307.948091ms) from=127.0.0.1:37950
2016/06/23 07:43:34 [DEBUG] http: Request GET /v1/kv/_rexec/9b239d80-2daf-328e-cd3d-ebb9213984e9/job (245.674µs) from=127.0.0.1:37950
2016/06/23 07:43:34 [DEBUG] http: Shutting down http server (127.0.0.1:10471)
--- PASS: TestExecCommand_UploadDestroy (2.53s)
=== RUN   TestExecCommand_StreamResults
2016/06/23 07:43:35 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:35740
2016/06/23 07:43:35 [DEBUG] http: Request GET /v1/catalog/nodes (412.346µs) from=127.0.0.1:35740
[preceding ERR/DEBUG pair repeated 37 more times at 07:43:35 while the test agent retried, awaiting a cluster leader]
2016/06/23 07:43:35 [DEBUG] http: Request GET /v1/catalog/nodes (489.348µs) from=127.0.0.1:35740
[DEBUG polling line repeated 30 more times with sub-millisecond response times; the ERR lines stop here, so the requests now succeed]
2016/06/23 07:43:37 [DEBUG] http: Request PUT /v1/session/create (112.485442ms) from=127.0.0.1:35742
2016/06/23 07:43:37 [DEBUG] http: Request GET /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/?keys= (2.976091ms) from=127.0.0.1:35746
2016/06/23 07:43:37 [DEBUG] http: Request PUT /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/foo/ack?acquire=4393b267-3833-fe34-05e1-9408e8511445 (108.078974ms) from=127.0.0.1:35744
2016/06/23 07:43:37 [DEBUG] http: Request GET /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/?index=1&keys= (101.076094ms) from=127.0.0.1:35746
2016/06/23 07:43:38 [DEBUG] http: Request PUT /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/foo/exit?acquire=4393b267-3833-fe34-05e1-9408e8511445 (152.347663ms) from=127.0.0.1:35748
2016/06/23 07:43:38 [DEBUG] http: Request GET /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/?index=5&keys= (1.09801027s) from=127.0.0.1:35746
2016/06/23 07:43:38 [DEBUG] http: Request GET /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/foo/exit (364.011µs) from=127.0.0.1:35748
2016/06/23 07:43:38 [DEBUG] http: Request PUT /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/foo/random?acquire=4393b267-3833-fe34-05e1-9408e8511445 (139.240261ms) from=127.0.0.1:35750
2016/06/23 07:43:38 [DEBUG] http: Request GET /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/?index=6&keys= (140.557635ms) from=127.0.0.1:35748
2016/06/23 07:43:38 [DEBUG] http: Request PUT /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/foo/out/00000?acquire=4393b267-3833-fe34-05e1-9408e8511445 (130.791669ms) from=127.0.0.1:35750
2016/06/23 07:43:38 [DEBUG] http: Request GET /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/?index=7&keys= (131.337353ms) from=127.0.0.1:35748
2016/06/23 07:43:38 [DEBUG] http: Request GET /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/foo/out/00000 (391.012µs) from=127.0.0.1:35750
2016/06/23 07:43:39 [ERR] http: Request PUT /v1/session/renew/9b239d80-2daf-328e-cd3d-ebb9213984e9, error: No cluster leader from=127.0.0.1:37950
2016/06/23 07:43:39 [DEBUG] http: Request PUT /v1/session/renew/9b239d80-2daf-328e-cd3d-ebb9213984e9 (280.675µs) from=127.0.0.1:37950
2016/06/23 07:43:42 [DEBUG] http: Request PUT /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/foo/out/00001?acquire=4393b267-3833-fe34-05e1-9408e8511445 (126.702877ms) from=127.0.0.1:35752
2016/06/23 07:43:42 [DEBUG] http: Request GET /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/?index=8&keys= (3.328326195s) from=127.0.0.1:35750
2016/06/23 07:43:42 [DEBUG] http: Request GET /v1/kv/_rexec/4393b267-3833-fe34-05e1-9408e8511445/foo/out/00001 (356.678µs) from=127.0.0.1:35752
2016/06/23 07:43:42 [DEBUG] http: Shutting down http server (127.0.0.1:10481)
--- PASS: TestExecCommand_StreamResults (7.73s)
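The requests leading up to this PASS trace `consul exec`'s rendezvous over the KV store: the target agent acquires an `ack` key under `_rexec/<session-id>/<node>/`, streams output as numbered `out/NNNNN` keys, and finishes with an `exit` key, while the invoker follows along with blocking queries (`?index=N&keys=`). A minimal sketch of how an invoker might classify those keys, assuming only the key layout visible in the log (the function name and return shape are hypothetical, not consul's API):

```python
# Classify _rexec KV keys the way the exec invoker in the log consumes them:
# "ack" -> node checked in, "out/NNNNN" -> ordered output chunk, "exit" -> done.
def classify_rexec_key(key):
    # Keys look like "_rexec/<session-id>/<node>/<suffix...>"
    parts = key.split("/")
    if len(parts) < 4 or parts[0] != "_rexec":
        return None
    node, suffix = parts[2], parts[3:]
    if suffix == ["ack"]:
        return (node, "ack", None)
    if suffix == ["exit"]:
        return (node, "exit", None)
    if len(suffix) == 2 and suffix[0] == "out":
        return (node, "out", int(suffix[1]))  # chunk sequence number
    return None

keys = [
    "_rexec/4393b267-3833-fe34-05e1-9408e8511445/foo/ack",
    "_rexec/4393b267-3833-fe34-05e1-9408e8511445/foo/out/00000",
    "_rexec/4393b267-3833-fe34-05e1-9408e8511445/foo/out/00001",
    "_rexec/4393b267-3833-fe34-05e1-9408e8511445/foo/exit",
]
print([classify_rexec_key(k) for k in keys])
# -> [('foo', 'ack', None), ('foo', 'out', 0), ('foo', 'out', 1), ('foo', 'exit', None)]
```

That ordering (ack, then out chunks, then exit) matches the sequence of PUTs at 07:43:37–07:43:42 above.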
=== RUN   TestForceLeaveCommand_implements
--- PASS: TestForceLeaveCommand_implements (0.00s)
=== RUN   TestForceLeaveCommandRun
2016/06/23 07:43:43 [ERR] http: Request PUT /v1/session/renew/4393b267-3833-fe34-05e1-9408e8511445, error: No cluster leader from=127.0.0.1:35754
2016/06/23 07:43:43 [DEBUG] http: Request PUT /v1/session/renew/4393b267-3833-fe34-05e1-9408e8511445 (268.341µs) from=127.0.0.1:35754
2016/06/23 07:43:43 [DEBUG] http: Shutting down http server (127.0.0.1:10501)
2016/06/23 07:43:43 [INFO] agent.rpc: Accepted client: 127.0.0.1:36816
2016/06/23 07:43:44 [DEBUG] http: Shutting down http server (127.0.0.1:10501)
2016/06/23 07:43:44 [DEBUG] http: Shutting down http server (127.0.0.1:10491)
--- PASS: TestForceLeaveCommandRun (2.08s)
=== RUN   TestForceLeaveCommandRun_noAddrs
--- PASS: TestForceLeaveCommandRun_noAddrs (0.00s)
=== RUN   TestInfoCommand_implements
--- PASS: TestInfoCommand_implements (0.00s)
=== RUN   TestInfoCommandRun
2016/06/23 07:43:44 [INFO] agent.rpc: Accepted client: 127.0.0.1:53090
2016/06/23 07:43:45 [DEBUG] http: Shutting down http server (127.0.0.1:10511)
--- PASS: TestInfoCommandRun (0.85s)
=== RUN   TestJoinCommand_implements
--- PASS: TestJoinCommand_implements (0.00s)
=== RUN   TestJoinCommandRun
2016/06/23 07:43:46 [INFO] agent.rpc: Accepted client: 127.0.0.1:36096
2016/06/23 07:43:46 [DEBUG] http: Shutting down http server (127.0.0.1:10531)
2016/06/23 07:43:47 [DEBUG] http: Shutting down http server (127.0.0.1:10521)
--- PASS: TestJoinCommandRun (1.96s)
=== RUN   TestJoinCommandRun_wan
2016/06/23 07:43:48 [INFO] agent.rpc: Accepted client: 127.0.0.1:48888
2016/06/23 07:43:48 [DEBUG] http: Shutting down http server (127.0.0.1:10551)
2016/06/23 07:43:48 [DEBUG] http: Shutting down http server (127.0.0.1:10541)
--- PASS: TestJoinCommandRun_wan (1.48s)
=== RUN   TestJoinCommandRun_noAddrs
--- PASS: TestJoinCommandRun_noAddrs (0.00s)
=== RUN   TestKeygenCommand_implements
--- PASS: TestKeygenCommand_implements (0.00s)
=== RUN   TestKeygenCommand
--- PASS: TestKeygenCommand (0.00s)
=== RUN   TestKeyringCommand_implements
--- PASS: TestKeyringCommand_implements (0.00s)
=== RUN   TestKeyringCommandRun
2016/06/23 07:43:49 [INFO] agent.rpc: Accepted client: 127.0.0.1:43384
2016/06/23 07:43:49 [INFO] agent.rpc: Accepted client: 127.0.0.1:43388
2016/06/23 07:43:49 [INFO] agent.rpc: Accepted client: 127.0.0.1:43390
2016/06/23 07:43:49 [INFO] agent.rpc: Accepted client: 127.0.0.1:43392
2016/06/23 07:43:49 [INFO] agent.rpc: Accepted client: 127.0.0.1:43394
2016/06/23 07:43:49 [INFO] agent.rpc: Accepted client: 127.0.0.1:43396
2016/06/23 07:43:49 [DEBUG] http: Shutting down http server (127.0.0.1:10561)
--- PASS: TestKeyringCommandRun (0.90s)
=== RUN   TestKeyringCommandRun_help
--- PASS: TestKeyringCommandRun_help (0.00s)
=== RUN   TestKeyringCommandRun_failedConnection
--- PASS: TestKeyringCommandRun_failedConnection (0.00s)
=== RUN   TestLeaveCommand_implements
--- PASS: TestLeaveCommand_implements (0.00s)
=== RUN   TestLeaveCommandRun
2016/06/23 07:43:49 [INFO] agent.rpc: Accepted client: 127.0.0.1:43538
2016/06/23 07:43:49 [INFO] agent.rpc: Graceful leave triggered
2016/06/23 07:43:50 [DEBUG] http: Shutting down http server (127.0.0.1:10571)
--- PASS: TestLeaveCommandRun (0.81s)
=== RUN   TestLockCommand_implements
--- PASS: TestLockCommand_implements (0.00s)
=== RUN   TestLockCommand_BadArgs
--- PASS: TestLockCommand_BadArgs (0.00s)
=== RUN   TestLockCommand_Run
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (394.346µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (326.677µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (290.342µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (283.009µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (286.009µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (291.676µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (282.008µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (335.344µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (299.676µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (278.009µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (314.676µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (328.01µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (319.343µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (289.342µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (296.676µs) from=127.0.0.1:36266
2016/06/23 07:43:50 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:50 [DEBUG] http: Request GET /v1/catalog/nodes (309.009µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (313.343µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (308.01µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (291.342µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (301.676µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (327.677µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (357.678µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (329.344µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (308.676µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (376.678µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (403.346µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (302.343µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (302.343µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (355.344µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (318.343µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (295.675µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (281.675µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (297.009µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (300.009µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (306.009µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (300.342µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (282.675µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (269.341µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (326.344µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (306.676µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (662.021µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (298.676µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (266.341µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (275.008µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (274.342µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (2.742084ms) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (312.676µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (455.68µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (565.684µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (463.347µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (250.341µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (263.675µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (292.676µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (262.675µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (254.008µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (269.008µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (254.675µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (270.342µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (278.342µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/catalog/nodes (307.676µs) from=127.0.0.1:36266
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/agent/self (746.023µs) from=127.0.0.1:36268
2016/06/23 07:43:51 [DEBUG] http: Request PUT /v1/session/create (117.566931ms) from=127.0.0.1:36268
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?wait=15000ms (316.01µs) from=127.0.0.1:36268
2016/06/23 07:43:51 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?acquire=88c555c4-7fbd-ea6b-cf3f-a8c8766fd219&flags=3304740253564472344 (118.307287ms) from=127.0.0.1:36268
2016/06/23 07:43:51 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent= (915.361µs) from=127.0.0.1:36268
2016/06/23 07:43:52 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?flags=3304740253564472344&release=88c555c4-7fbd-ea6b-cf3f-a8c8766fd219 (157.088808ms) from=127.0.0.1:36270
2016/06/23 07:43:52 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent=&index=5 (213.613205ms) from=127.0.0.1:36268
2016/06/23 07:43:52 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (377.012µs) from=127.0.0.1:36268
2016/06/23 07:43:52 [DEBUG] http: Request PUT /v1/session/destroy/88c555c4-7fbd-ea6b-cf3f-a8c8766fd219 (552.339904ms) from=127.0.0.1:36272
2016/06/23 07:43:52 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=6 (901.8306ms) from=127.0.0.1:36268
2016/06/23 07:43:53 [DEBUG] http: Shutting down http server (127.0.0.1:10581)
--- PASS: TestLockCommand_Run (2.78s)
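The lock lifecycle visible in the requests above (PUT `/v1/session/create`, PUT `.lock?acquire=<session>`, PUT `.lock?release=<session>`, PUT `/v1/session/destroy/...`, then DELETE `.lock?cas=6`) can be sketched with a minimal in-memory model. This is an illustration of the semantics only; real consul enforces these rules server-side through Raft, and the class below is not part of any consul library:

```python
# In-memory model of the KV lock semantics exercised by TestLockCommand_Run:
# acquire/release are tied to a session, and cleanup uses a check-and-set delete.
class KVLock:
    def __init__(self):
        self.session = None      # session currently holding the lock, if any
        self.modify_index = 0    # bumped on every successful write

    def acquire(self, session):
        if self.session is not None:   # already held by someone
            return False
        self.session = session
        self.modify_index += 1
        return True

    def release(self, session):
        if self.session != session:    # only the holder may release
            return False
        self.session = None
        self.modify_index += 1
        return True

    def cas_delete(self, index):
        # Delete succeeds only if the key is free and unmodified since `index`,
        # mirroring the DELETE ...?cas=6 request in the log.
        return self.session is None and index == self.modify_index

lock = KVLock()
assert lock.acquire("88c555c4")    # acquire with session -> lock held
assert not lock.acquire("other")   # a contender fails while it is held
assert lock.release("88c555c4")    # release by the holder succeeds
assert lock.cas_delete(2)          # CAS delete at the current modify index
```

The CAS delete is what lets the last holder garbage-collect the `.lock` key without racing a new acquirer: if anyone has touched the key since, the index no longer matches and the delete is refused.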
=== RUN   TestLockCommand_Try_Lock
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (298.676µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (282.009µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (295.009µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (327.677µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (343.343µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (312.676µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (291.675µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (282.342µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (298.342µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (4.823814ms) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (344.01µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (309.342µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (282.342µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (299.342µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (282.009µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (297.342µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (318.01µs) from=127.0.0.1:40642
2016/06/23 07:43:53 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:53 [DEBUG] http: Request GET /v1/catalog/nodes (312.676µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (332.01µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (327.01µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (308.676µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (292.675µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (295.009µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (304.676µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (303.009µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (293.009µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (463.014µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (636.353µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (717.689µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (291.676µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (289.675µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (282.342µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (297.676µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (377.011µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (292.342µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (265.008µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (344.677µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (358.011µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (341.01µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (326.01µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (326.676µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (268.342µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (264.341µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (269.009µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (261.342µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (268.008µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (264.008µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (262.675µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (289.009µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (281.342µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (259.674µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (350.677µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (318.343µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (277.342µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (272.342µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (410.679µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (326.01µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (362.678µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (281.008µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (314.676µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (283.342µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (305.676µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (628.353µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (522.35µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (436.013µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/catalog/nodes (274.676µs) from=127.0.0.1:40642
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/agent/self (969.696µs) from=127.0.0.1:40644
2016/06/23 07:43:54 [DEBUG] http: Request PUT /v1/session/create (119.065977ms) from=127.0.0.1:40644
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?wait=10000ms (277.675µs) from=127.0.0.1:40644
2016/06/23 07:43:54 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?acquire=d354a586-1a46-9bf0-a0c9-dd9f08550e63&flags=3304740253564472344 (118.606297ms) from=127.0.0.1:40644
2016/06/23 07:43:54 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent= (3.05976ms) from=127.0.0.1:40644
2016/06/23 07:43:55 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?flags=3304740253564472344&release=d354a586-1a46-9bf0-a0c9-dd9f08550e63 (177.133754ms) from=127.0.0.1:40646
2016/06/23 07:43:55 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent=&index=5 (218.914366ms) from=127.0.0.1:40644
2016/06/23 07:43:55 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (323.01µs) from=127.0.0.1:40646
2016/06/23 07:43:55 [DEBUG] http: Request PUT /v1/session/destroy/d354a586-1a46-9bf0-a0c9-dd9f08550e63 (121.700725ms) from=127.0.0.1:40644
2016/06/23 07:43:55 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=6 (254.029774ms) from=127.0.0.1:40646
2016/06/23 07:43:55 [DEBUG] http: Shutting down http server (127.0.0.1:10591)
--- PASS: TestLockCommand_Try_Lock (2.28s)
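The bursts of `No cluster leader` errors in these lock tests are the harness polling `GET /v1/catalog/nodes` until the freshly started single-node agent elects itself leader; each burst resolves within a second or two, after which the requests succeed. A retry loop of that shape might look like the following sketch (all names here are hypothetical, not the test suite's actual helpers):

```python
import time

def wait_for_leader(check, timeout=5.0, interval=0.01):
    """Poll `check` (returns True once a leader answers) until the deadline."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)   # back off briefly between polls
    return False

# Simulated agent that rejects the first few polls, as in the log above:
attempts = {"n": 0}
def fake_catalog_nodes():
    attempts["n"] += 1
    return attempts["n"] > 3   # "No cluster leader" for the first 3 polls

print(wait_for_leader(fake_catalog_nodes))  # True
```

Polling with a deadline rather than a fixed retry count is what makes these tests tolerant of slow hardware; on this armhf builder, leader election takes noticeably longer than on the x86 machines the timings were tuned for, hence the long runs of retries.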
=== RUN   TestLockCommand_Try_Semaphore
2016/06/23 07:43:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36948
2016/06/23 07:43:55 [DEBUG] http: Request GET /v1/catalog/nodes (309.01µs) from=127.0.0.1:36948
2016/06/23 07:43:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36948
2016/06/23 07:43:55 [DEBUG] http: Request GET /v1/catalog/nodes (299.676µs) from=127.0.0.1:36948
2016/06/23 07:43:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36948
2016/06/23 07:43:55 [DEBUG] http: Request GET /v1/catalog/nodes (337.677µs) from=127.0.0.1:36948
2016/06/23 07:43:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36948
2016/06/23 07:43:55 [DEBUG] http: Request GET /v1/catalog/nodes (358.011µs) from=127.0.0.1:36948
2016/06/23 07:43:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36948
2016/06/23 07:43:55 [DEBUG] http: Request GET /v1/catalog/nodes (350.011µs) from=127.0.0.1:36948
2016/06/23 07:43:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36948
2016/06/23 07:43:55 [DEBUG] http: Request GET /v1/catalog/nodes (315.343µs) from=127.0.0.1:36948
2016/06/23 07:43:55 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36948
[... 27 similar DEBUG request / ERR "No cluster leader" pairs for GET /v1/catalog/nodes elided ...]
2016/06/23 07:43:56 [DEBUG] http: Request GET /v1/catalog/nodes (656.353µs) from=127.0.0.1:36948
2016/06/23 07:43:56 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:36948
2016/06/23 07:43:56 [DEBUG] http: Request GET /v1/catalog/nodes (284.676µs) from=127.0.0.1:36948
2016/06/23 07:43:56 [DEBUG] http: Request GET /v1/catalog/nodes (314.343µs) from=127.0.0.1:36948
[... 30 similar DEBUG GET /v1/catalog/nodes lines elided ...]
2016/06/23 07:43:56 [DEBUG] http: Request GET /v1/catalog/nodes (290.676µs) from=127.0.0.1:36948
2016/06/23 07:43:56 [DEBUG] http: Request GET /v1/agent/self (735.69µs) from=127.0.0.1:36950
2016/06/23 07:43:56 [DEBUG] http: Request PUT /v1/session/create (128.240924ms) from=127.0.0.1:36950
2016/06/23 07:43:56 [DEBUG] http: Request PUT /v1/kv/test/prefix/f25d444c-60ae-6205-6226-8bbdec13abf9?acquire=f25d444c-60ae-6205-6226-8bbdec13abf9&flags=16210313421097356768 (131.175014ms) from=127.0.0.1:36950
2016/06/23 07:43:56 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse=&wait=10000ms (495.349µs) from=127.0.0.1:36950
2016/06/23 07:43:57 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=0&flags=16210313421097356768 (185.151667ms) from=127.0.0.1:36950
2016/06/23 07:43:57 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&recurse= (764.357µs) from=127.0.0.1:36950
2016/06/23 07:43:57 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (628.019µs) from=127.0.0.1:36952
2016/06/23 07:43:57 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=6&flags=16210313421097356768 (133.119741ms) from=127.0.0.1:36952
2016/06/23 07:43:57 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&index=6&recurse= (262.130022ms) from=127.0.0.1:36950
2016/06/23 07:43:57 [DEBUG] http: Request DELETE /v1/kv/test/prefix/f25d444c-60ae-6205-6226-8bbdec13abf9 (131.920371ms) from=127.0.0.1:36952
2016/06/23 07:43:57 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse= (519.683µs) from=127.0.0.1:36950
2016/06/23 07:43:57 [DEBUG] http: Request PUT /v1/session/destroy/f25d444c-60ae-6205-6226-8bbdec13abf9 (305.853361ms) from=127.0.0.1:36954
2016/06/23 07:43:57 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=7 (310.476835ms) from=127.0.0.1:36950
2016/06/23 07:43:57 [DEBUG] http: Shutting down http server (127.0.0.1:10601)
--- PASS: TestLockCommand_Try_Semaphore (2.52s)
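Every timed request line above shares one shape: timestamp, level, method, path, wall-clock duration in parentheses, and the client address. A minimal parsing sketch (the regex and field names are my own, not anything the build or Consul defines):

```python
import re

# Shape of the agent's DEBUG request lines as printed in this log.
# Group names are illustrative, not an official schema.
LINE_RE = re.compile(
    r"(?P<ts>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) "
    r"\[(?P<level>\w+)\] http: Request (?P<method>\w+) (?P<path>\S+) "
    r"\((?P<duration>[^)]+)\) from=(?P<client>\S+)"
)

def parse(line):
    """Return the fields of a timed request line, or None for other lines
    (the ERR 'No cluster leader' lines carry no duration and do not match)."""
    m = LINE_RE.search(line)
    return m.groupdict() if m else None
```

Run over the polling bursts above, this would show each /v1/catalog/nodes probe completing in a few hundred microseconds.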
=== RUN   TestLockCommand_MonitorRetry_Lock_Default
2016/06/23 07:43:58 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:54058
2016/06/23 07:43:58 [DEBUG] http: Request GET /v1/catalog/nodes (299.009µs) from=127.0.0.1:54058
[... 33 similar ERR "No cluster leader" / DEBUG request pairs for GET /v1/catalog/nodes elided ...]
2016/06/23 07:43:58 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:54058
2016/06/23 07:43:58 [DEBUG] http: Request GET /v1/catalog/nodes (294.676µs) from=127.0.0.1:54058
2016/06/23 07:43:58 [DEBUG] http: Request GET /v1/catalog/nodes (300.343µs) from=127.0.0.1:54058
[... 28 similar DEBUG GET /v1/catalog/nodes lines elided ...]
2016/06/23 07:43:59 [DEBUG] http: Request GET /v1/catalog/nodes (314.676µs) from=127.0.0.1:54058
2016/06/23 07:43:59 [DEBUG] http: Request GET /v1/agent/self (757.357µs) from=127.0.0.1:54060
2016/06/23 07:43:59 [DEBUG] http: Request PUT /v1/session/create (133.384416ms) from=127.0.0.1:54060
2016/06/23 07:43:59 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?wait=15000ms (314.676µs) from=127.0.0.1:54060
2016/06/23 07:44:00 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?acquire=a2bf046a-c746-a9b8-78cd-9fe6a6f896cf&flags=3304740253564472344 (118.874971ms) from=127.0.0.1:54060
2016/06/23 07:44:00 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent= (1.471045ms) from=127.0.0.1:54060
2016/06/23 07:44:00 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?flags=3304740253564472344&release=a2bf046a-c746-a9b8-78cd-9fe6a6f896cf (164.700041ms) from=127.0.0.1:54062
2016/06/23 07:44:00 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent=&index=5 (186.172697ms) from=127.0.0.1:54060
2016/06/23 07:44:00 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (372.344µs) from=127.0.0.1:54062
2016/06/23 07:44:00 [DEBUG] http: Request PUT /v1/session/destroy/a2bf046a-c746-a9b8-78cd-9fe6a6f896cf (120.151344ms) from=127.0.0.1:54064
2016/06/23 07:44:00 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=6 (230.900733ms) from=127.0.0.1:54062
2016/06/23 07:44:00 [DEBUG] http: Shutting down http server (127.0.0.1:10611)
--- PASS: TestLockCommand_MonitorRetry_Lock_Default (2.61s)
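While each test agent waits for leader election, the log fills with lines that differ only in timestamp and request duration. A post-processing sketch that tallies same-shaped lines (the normalization rule is my own choice, not part of the build tooling):

```python
import re
from collections import Counter

def normalize(line):
    """Drop the timestamp and per-request duration so repeats compare equal."""
    line = re.sub(r"^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2} ", "", line)
    return re.sub(r"\([\d.]+(?:µs|ms|s)\)", "(...)", line)

def tally(lines):
    """Count occurrences of each normalized line shape, most frequent first."""
    return Counter(normalize(l) for l in lines).most_common()
```

On the runs above this reduces hundreds of polling lines to a handful of shapes with counts, which is how the elided spans in this log were summarized.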
=== RUN   TestLockCommand_MonitorRetry_Semaphore_Default
2016/06/23 07:44:02 [DEBUG] http: Request GET /v1/catalog/nodes (300.676µs) from=127.0.0.1:45086
2016/06/23 07:44:02 [DEBUG] http: Request GET /v1/catalog/nodes (278.008µs) from=127.0.0.1:45086
2016/06/23 07:44:02 [DEBUG] http: Request GET /v1/catalog/nodes (246.675µs) from=127.0.0.1:45086
2016/06/23 07:44:02 [DEBUG] http: Request GET /v1/catalog/nodes (307.676µs) from=127.0.0.1:45086
2016/06/23 07:44:02 [DEBUG] http: Request GET /v1/agent/self (750.69µs) from=127.0.0.1:45088
2016/06/23 07:44:02 [DEBUG] http: Request PUT /v1/session/create (134.694122ms) from=127.0.0.1:45088
2016/06/23 07:44:02 [DEBUG] http: Request PUT /v1/kv/test/prefix/084df2ef-2c39-ede9-1242-d823d7c02b82?acquire=084df2ef-2c39-ede9-1242-d823d7c02b82&flags=16210313421097356768 (129.791639ms) from=127.0.0.1:45088
2016/06/23 07:44:02 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse=&wait=15000ms (474.348µs) from=127.0.0.1:45088
2016/06/23 07:44:02 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=0&flags=16210313421097356768 (192.031877ms) from=127.0.0.1:45088
2016/06/23 07:44:02 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&recurse= (1.160035ms) from=127.0.0.1:45088
2016/06/23 07:44:02 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (612.019µs) from=127.0.0.1:45090
2016/06/23 07:44:02 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=6&flags=16210313421097356768 (134.144439ms) from=127.0.0.1:45090
2016/06/23 07:44:02 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&index=6&recurse= (164.827045ms) from=127.0.0.1:45088
2016/06/23 07:44:03 [DEBUG] http: Request DELETE /v1/kv/test/prefix/084df2ef-2c39-ede9-1242-d823d7c02b82 (119.45799ms) from=127.0.0.1:45088
2016/06/23 07:44:03 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse= (398.012µs) from=127.0.0.1:45088
2016/06/23 07:44:03 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=7 (275.211422ms) from=127.0.0.1:45088
2016/06/23 07:44:03 [DEBUG] http: Request PUT /v1/session/destroy/084df2ef-2c39-ede9-1242-d823d7c02b82 (279.446553ms) from=127.0.0.1:45092
2016/06/23 07:44:03 [DEBUG] http: Shutting down http server (127.0.0.1:10621)
--- PASS: TestLockCommand_MonitorRetry_Semaphore_Default (3.13s)
=== RUN   TestLockCommand_MonitorRetry_Lock_Arg
2016/06/23 07:44:04 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33720
2016/06/23 07:44:04 [DEBUG] http: Request GET /v1/catalog/nodes (345.344µs) from=127.0.0.1:33720
[... 33 similar ERR "No cluster leader" / DEBUG request pairs for GET /v1/catalog/nodes elided ...]
2016/06/23 07:44:04 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33720
2016/06/23 07:44:04 [DEBUG] http: Request GET /v1/catalog/nodes (316.676µs) from=127.0.0.1:33720
2016/06/23 07:44:04 [DEBUG] http: Request GET /v1/catalog/nodes (270.008µs) from=127.0.0.1:33720
[... 30 similar DEBUG GET /v1/catalog/nodes lines elided ...]
2016/06/23 07:44:04 [DEBUG] http: Request GET /v1/catalog/nodes (604.019µs) from=127.0.0.1:33720
2016/06/23 07:44:04 [DEBUG] http: Request GET /v1/agent/self (904.361µs) from=127.0.0.1:33722
2016/06/23 07:44:05 [DEBUG] http: Request PUT /v1/session/create (114.669176ms) from=127.0.0.1:33722
2016/06/23 07:44:05 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?wait=15000ms (1.208037ms) from=127.0.0.1:33722
2016/06/23 07:44:05 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?acquire=9ce05735-d20b-86b6-8161-7feb11c9e0b8&flags=3304740253564472344 (136.952525ms) from=127.0.0.1:33722
2016/06/23 07:44:05 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent= (3.025426ms) from=127.0.0.1:33722
2016/06/23 07:44:05 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?flags=3304740253564472344&release=9ce05735-d20b-86b6-8161-7feb11c9e0b8 (183.726957ms) from=127.0.0.1:33724
2016/06/23 07:44:05 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock?consistent=&index=5 (213.992549ms) from=127.0.0.1:33722
2016/06/23 07:44:05 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (378.345µs) from=127.0.0.1:33722
2016/06/23 07:44:05 [DEBUG] http: Request PUT /v1/session/destroy/9ce05735-d20b-86b6-8161-7feb11c9e0b8 (235.586877ms) from=127.0.0.1:33726
2016/06/23 07:44:05 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=6 (235.403538ms) from=127.0.0.1:33722
2016/06/23 07:44:05 [DEBUG] http: Shutting down http server (127.0.0.1:10631)
--- PASS: TestLockCommand_MonitorRetry_Lock_Arg (2.09s)
=== RUN   TestLockCommand_MonitorRetry_Semaphore_Arg
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (377.344µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (315.676µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (275.009µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (317.343µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (437.014µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (334.01µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (305.009µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (309.342µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (330.677µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (311.01µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (318.343µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (315.01µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (362.344µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (311.676µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (325.344µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (310.01µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (346.677µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (328.01µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (405.679µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (340.344µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (335.343µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (316.343µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (323.343µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (946.696µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (324.677µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (327.676µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (333.344µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (310.009µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (315.343µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (311.343µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (374.678µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (294.009µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (285.342µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (373.345µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (313.01µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (308.676µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (378.678µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (306.676µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (320.677µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (291.009µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (338.01µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (317.677µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (333.677µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (438.014µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (392.012µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (420.013µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (386.345µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (281.009µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (408.679µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (352.677µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (402.012µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (354.678µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (345.344µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (293.009µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (297.009µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (405.679µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (399.012µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (327.343µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (283.009µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (596.685µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (627.686µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (295.676µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (483.015µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (457.681µs) from=127.0.0.1:59870
2016/06/23 07:44:06 [DEBUG] http: Request GET /v1/catalog/nodes (289.009µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (521.016µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (630.019µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (670.687µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (679.021µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (17.102524ms) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (292.009µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (274.342µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (263.008µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (569.351µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (571.351µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (558.35µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (602.685µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (512.016µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (584.685µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (522.682µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (426.013µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (460.348µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (470.015µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (403.679µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (544.017µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (584.351µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (572.684µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (579.351µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (492.682µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (505.348µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (502.682µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (541.016µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (510.682µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (490.681µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (504.015µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (504.349µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (525.683µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (490.349µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (556.35µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (488.348µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (266.008µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (310.343µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (335.01µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (318.01µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (285.676µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (323.01µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (276.008µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (271.009µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (554.351µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (288.009µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (336.677µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/catalog/nodes (347.011µs) from=127.0.0.1:59870
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/agent/self (856.693µs) from=127.0.0.1:59872
2016/06/23 07:44:07 [DEBUG] http: Request PUT /v1/session/create (134.853794ms) from=127.0.0.1:59872
2016/06/23 07:44:07 [DEBUG] http: Request PUT /v1/kv/test/prefix/765bb1e3-34c5-e35b-b913-b3864ecb7b63?acquire=765bb1e3-34c5-e35b-b913-b3864ecb7b63&flags=16210313421097356768 (130.520661ms) from=127.0.0.1:59872
2016/06/23 07:44:07 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse=&wait=15000ms (479.015µs) from=127.0.0.1:59872
2016/06/23 07:44:08 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=0&flags=16210313421097356768 (173.617647ms) from=127.0.0.1:59872
2016/06/23 07:44:08 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&recurse= (1.070033ms) from=127.0.0.1:59872
2016/06/23 07:44:08 [DEBUG] http: Request GET /v1/kv/test/prefix/.lock (640.687µs) from=127.0.0.1:59874
2016/06/23 07:44:08 [DEBUG] http: Request PUT /v1/kv/test/prefix/.lock?cas=6&flags=16210313421097356768 (176.423399ms) from=127.0.0.1:59874
2016/06/23 07:44:08 [DEBUG] http: Request GET /v1/kv/test/prefix?consistent=&index=6&recurse= (191.423525ms) from=127.0.0.1:59872
2016/06/23 07:44:08 [DEBUG] http: Request DELETE /v1/kv/test/prefix/765bb1e3-34c5-e35b-b913-b3864ecb7b63 (172.608283ms) from=127.0.0.1:59874
2016/06/23 07:44:08 [DEBUG] http: Request GET /v1/kv/test/prefix?recurse= (510.349µs) from=127.0.0.1:59872
2016/06/23 07:44:08 [DEBUG] http: Request PUT /v1/session/destroy/765bb1e3-34c5-e35b-b913-b3864ecb7b63 (155.718765ms) from=127.0.0.1:59876
2016/06/23 07:44:08 [DEBUG] http: Request DELETE /v1/kv/test/prefix/.lock?cas=7 (307.656749ms) from=127.0.0.1:59872
2016/06/23 07:44:08 [DEBUG] http: Shutting down http server (127.0.0.1:10641)
--- PASS: TestLockCommand_MonitorRetry_Semaphore_Arg (3.13s)
=== RUN   TestMaintCommand_implements
--- PASS: TestMaintCommand_implements (0.00s)
=== RUN   TestMaintCommandRun_ConflictingArgs
--- PASS: TestMaintCommandRun_ConflictingArgs (0.00s)
=== RUN   TestMaintCommandRun_NoArgs
2016/06/23 07:44:09 [DEBUG] http: Request GET /v1/agent/self (812.025µs) from=127.0.0.1:34152
2016/06/23 07:44:09 [DEBUG] http: Request GET /v1/agent/checks (269.008µs) from=127.0.0.1:34152
2016/06/23 07:44:09 [DEBUG] http: Shutting down http server (127.0.0.1:10651)
--- PASS: TestMaintCommandRun_NoArgs (0.94s)
=== RUN   TestMaintCommandRun_EnableNodeMaintenance
2016/06/23 07:44:10 [DEBUG] http: Request GET /v1/agent/self (791.691µs) from=127.0.0.1:60918
2016/06/23 07:44:10 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:44:10 [DEBUG] http: Request PUT /v1/agent/maintenance?enable=true&reason=broken (1.524047ms) from=127.0.0.1:60918
2016/06/23 07:44:10 [DEBUG] http: Shutting down http server (127.0.0.1:10661)
--- PASS: TestMaintCommandRun_EnableNodeMaintenance (0.84s)
=== RUN   TestMaintCommandRun_DisableNodeMaintenance
2016/06/23 07:44:11 [DEBUG] http: Request GET /v1/agent/self (838.025µs) from=127.0.0.1:55738
2016/06/23 07:44:11 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:44:11 [DEBUG] http: Request PUT /v1/agent/maintenance?enable=false (325.677µs) from=127.0.0.1:55738
2016/06/23 07:44:11 [DEBUG] http: Shutting down http server (127.0.0.1:10671)
--- PASS: TestMaintCommandRun_DisableNodeMaintenance (0.98s)
=== RUN   TestMaintCommandRun_EnableServiceMaintenance
2016/06/23 07:44:12 [DEBUG] http: Request GET /v1/agent/self (709.689µs) from=127.0.0.1:60916
2016/06/23 07:44:12 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:44:12 [DEBUG] http: Request PUT /v1/agent/service/maintenance/test?enable=true&reason=broken (2.170733ms) from=127.0.0.1:60916
2016/06/23 07:44:13 [DEBUG] http: Shutting down http server (127.0.0.1:10681)
--- PASS: TestMaintCommandRun_EnableServiceMaintenance (1.48s)
=== RUN   TestMaintCommandRun_DisableServiceMaintenance
2016/06/23 07:44:13 [DEBUG] http: Request GET /v1/agent/self (821.359µs) from=127.0.0.1:38500
2016/06/23 07:44:13 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:44:13 [DEBUG] http: Request PUT /v1/agent/service/maintenance/test?enable=false (300.343µs) from=127.0.0.1:38500
2016/06/23 07:44:14 [DEBUG] http: Shutting down http server (127.0.0.1:10691)
--- PASS: TestMaintCommandRun_DisableServiceMaintenance (1.13s)
=== RUN   TestMaintCommandRun_ServiceMaintenance_NoService
2016/06/23 07:44:14 [DEBUG] http: Request GET /v1/agent/self (818.025µs) from=127.0.0.1:35450
2016/06/23 07:44:14 [DEBUG] http: Request PUT /v1/agent/service/maintenance/redis?enable=true&reason=broken (130.004µs) from=127.0.0.1:35450
2016/06/23 07:44:15 [DEBUG] http: Shutting down http server (127.0.0.1:10701)
--- PASS: TestMaintCommandRun_ServiceMaintenance_NoService (1.25s)
=== RUN   TestMembersCommand_implements
--- PASS: TestMembersCommand_implements (0.00s)
=== RUN   TestMembersCommandRun
2016/06/23 07:44:16 [INFO] agent.rpc: Accepted client: 127.0.0.1:45342
2016/06/23 07:44:16 [DEBUG] http: Shutting down http server (127.0.0.1:10711)
--- PASS: TestMembersCommandRun (1.13s)
=== RUN   TestMembersCommandRun_WAN
2016/06/23 07:44:17 [INFO] agent.rpc: Accepted client: 127.0.0.1:55828
2016/06/23 07:44:18 [DEBUG] http: Shutting down http server (127.0.0.1:10721)
--- PASS: TestMembersCommandRun_WAN (1.41s)
=== RUN   TestMembersCommandRun_statusFilter
2016/06/23 07:44:18 [INFO] agent.rpc: Accepted client: 127.0.0.1:35884
2016/06/23 07:44:19 [DEBUG] http: Shutting down http server (127.0.0.1:10731)
--- PASS: TestMembersCommandRun_statusFilter (1.22s)
=== RUN   TestMembersCommandRun_statusFilter_failed
2016/06/23 07:44:20 [INFO] agent.rpc: Accepted client: 127.0.0.1:36382
2016/06/23 07:44:21 [DEBUG] http: Shutting down http server (127.0.0.1:10741)
--- PASS: TestMembersCommandRun_statusFilter_failed (1.74s)
=== RUN   TestReloadCommand_implements
--- PASS: TestReloadCommand_implements (0.00s)
=== RUN   TestReloadCommandRun
2016/06/23 07:44:21 [INFO] agent.rpc: Accepted client: 127.0.0.1:38618
2016/06/23 07:44:22 [DEBUG] http: Shutting down http server (127.0.0.1:10751)
--- PASS: TestReloadCommandRun (1.09s)
=== RUN   TestAddrFlag_default
--- PASS: TestAddrFlag_default (0.00s)
=== RUN   TestAddrFlag_onlyEnv
--- PASS: TestAddrFlag_onlyEnv (0.00s)
=== RUN   TestAddrFlag_precedence
--- PASS: TestAddrFlag_precedence (0.00s)
=== RUN   TestRTTCommand_Implements
--- PASS: TestRTTCommand_Implements (0.00s)
=== RUN   TestRTTCommand_Run_BadArgs
--- PASS: TestRTTCommand_Run_BadArgs (0.00s)
=== RUN   TestRTTCommand_Run_LAN
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (281.008µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (378.012µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (281.676µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (267.341µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (281.342µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (287.675µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (354.677µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (368.345µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (393.346µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (290.676µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (307.009µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (339.677µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (354.344µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (324.344µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (320.676µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (429.68µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (783.69µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (891.361µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (415.679µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (477.681µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (405.345µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (394.013µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (384.012µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (405.013µs) from=127.0.0.1:33114
2016/06/23 07:44:22 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:22 [DEBUG] http: Request GET /v1/catalog/nodes (400.012µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (291.343µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (439.68µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (331.01µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (292.342µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (289.008µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (287.009µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (285.008µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (299.343µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (293.676µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (283.675µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (290.342µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (342.011µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (519.683µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (375.345µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (2.28407ms) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (230.34µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (248.008µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (230.008µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (227.674µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (244.674µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (243.674µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (226.673µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (241.674µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (258.008µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (250.341µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (249.341µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (231.674µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (227.007µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (258.008µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (301.343µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (450.014µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (2.670415ms) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (229.673µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (396.345µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (280.676µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (246.674µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (238.34µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (263.008µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (237.34µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (299.675µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (231.341µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (224.007µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (263.008µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (258.675µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (259.008µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (284.342µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (244.674µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (298.343µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (236.341µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (287.009µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (260.008µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (227.34µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (226.341µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (225.674µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/catalog/nodes (283.342µs) from=127.0.0.1:33114
2016/06/23 07:44:23 [DEBUG] http: Request GET /v1/coordinate/nodes (426.013µs) from=127.0.0.1:33118
2016/06/23 07:44:24 [DEBUG] http: Shutting down http server (127.0.0.1:10761)
--- FAIL: TestRTTCommand_Run_LAN (1.96s)
	rtt_test.go:105: bad: 1: "Could not find a coordinate for node \"Node 36\"\n"
=== RUN   TestRTTCommand_Run_WAN
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (291.342µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (291.676µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (298.009µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (294.343µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (398.346µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (353.678µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (385.012µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (345.011µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (392.012µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (416.346µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (2.439741ms) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (324.344µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (376.678µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (342.011µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (670.021µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (722.022µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (370.678µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (325.676µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (266.342µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (276.342µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (281.676µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (275.675µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (261.342µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (265.008µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (257.674µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (305.676µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (249.008µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (375.678µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (303.01µs) from=127.0.0.1:42910
2016/06/23 07:44:24 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:24 [DEBUG] http: Request GET /v1/catalog/nodes (265.008µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (265.008µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (250.341µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (271.008µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (289.342µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (250.674µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (253.674µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (292.343µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (263.675µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (271.675µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (252.008µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (307.343µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (308.01µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (261.008µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (240.008µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (303.009µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (295.342µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (266.341µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (267.008µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (269.008µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (248.341µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (345.678µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (246.674µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (244.008µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (295.009µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (301.676µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (260.008µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (242.34µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (294.009µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (252.674µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (417.68µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (244.674µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (278.342µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (316.343µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (342.01µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (252.008µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (232.007µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (236.674µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (441.68µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (227.34µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/catalog/nodes (281.009µs) from=127.0.0.1:42910
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/coordinate/datacenters (428.346µs) from=127.0.0.1:42912
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/agent/self (885.027µs) from=127.0.0.1:42914
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/coordinate/datacenters (228.007µs) from=127.0.0.1:42914
2016/06/23 07:44:25 [DEBUG] http: Request GET /v1/coordinate/datacenters (219.34µs) from=127.0.0.1:42916
2016/06/23 07:44:25 [DEBUG] http: Shutting down http server (127.0.0.1:10771)
--- PASS: TestRTTCommand_Run_WAN (1.54s)
=== RUN   TestVersionCommand_implements
--- PASS: TestVersionCommand_implements (0.00s)
=== RUN   TestWatchCommand_implements
--- PASS: TestWatchCommand_implements (0.00s)
=== RUN   TestWatchCommandRun
2016/06/23 07:44:26 [DEBUG] http: Request GET /v1/agent/self (854.026µs) from=127.0.0.1:52894
2016/06/23 07:44:26 [ERR] http: Request GET /v1/catalog/nodes, error: No cluster leader from=127.0.0.1:52896
2016/06/23 07:44:26 [DEBUG] http: Request GET /v1/catalog/nodes (417.679µs) from=127.0.0.1:52896
2016/06/23 07:44:26 consul.watch: Watch (type: nodes) errored: Unexpected response code: 500 (No cluster leader), retry in 5s
2016/06/23 07:44:31 [DEBUG] http: Request GET /v1/catalog/nodes (315.677µs) from=127.0.0.1:52896
2016/06/23 07:44:31 [DEBUG] http: Shutting down http server (127.0.0.1:10781)
--- PASS: TestWatchCommandRun (5.83s)
FAIL
FAIL	github.com/hashicorp/consul/command	71.158s
=== RUN   TestACLUpdate
2016/06/23 07:44:18 [INFO] serf: EventMemberJoin: Node 1 127.0.0.1
2016/06/23 07:44:18 [INFO] raft: Node at 127.0.0.1:18001 [Follower] entering Follower state
2016/06/23 07:44:18 [INFO] consul: adding LAN server Node 1 (Addr: 127.0.0.1:18001) (DC: dc1)
2016/06/23 07:44:18 [INFO] serf: EventMemberJoin: Node 1.dc1 127.0.0.1
2016/06/23 07:44:18 [INFO] consul: adding WAN server Node 1.dc1 (Addr: 127.0.0.1:18001) (DC: dc1)
2016/06/23 07:44:18 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:18 [INFO] raft: Node at 127.0.0.1:18001 [Candidate] entering Candidate state
2016/06/23 07:44:18 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:18 [DEBUG] raft: Vote granted from 127.0.0.1:18001. Tally: 1
2016/06/23 07:44:18 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:18 [INFO] raft: Node at 127.0.0.1:18001 [Leader] entering Leader state
2016/06/23 07:44:18 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:18 [INFO] consul: New leader elected: Node 1
2016/06/23 07:44:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:19 [DEBUG] raft: Node 127.0.0.1:18001 updated peer set (2): [127.0.0.1:18001]
2016/06/23 07:44:19 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:20 [INFO] consul: member 'Node 1' joined, marking health alive
2016/06/23 07:44:21 [INFO] agent: requesting shutdown
2016/06/23 07:44:21 [INFO] consul: shutting down server
2016/06/23 07:44:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:21 [INFO] agent: shutdown complete
2016/06/23 07:44:21 [DEBUG] http: Shutting down http server (127.0.0.1:18801)
--- PASS: TestACLUpdate (3.30s)
=== RUN   TestACLUpdate_Upsert
2016/06/23 07:44:21 [INFO] raft: Node at 127.0.0.1:18002 [Follower] entering Follower state
2016/06/23 07:44:21 [INFO] serf: EventMemberJoin: Node 2 127.0.0.1
2016/06/23 07:44:21 [INFO] consul: adding LAN server Node 2 (Addr: 127.0.0.1:18002) (DC: dc1)
2016/06/23 07:44:21 [INFO] serf: EventMemberJoin: Node 2.dc1 127.0.0.1
2016/06/23 07:44:21 [INFO] consul: adding WAN server Node 2.dc1 (Addr: 127.0.0.1:18002) (DC: dc1)
2016/06/23 07:44:21 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:21 [INFO] raft: Node at 127.0.0.1:18002 [Candidate] entering Candidate state
2016/06/23 07:44:22 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:22 [DEBUG] raft: Vote granted from 127.0.0.1:18002. Tally: 1
2016/06/23 07:44:22 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:22 [INFO] raft: Node at 127.0.0.1:18002 [Leader] entering Leader state
2016/06/23 07:44:22 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:22 [INFO] consul: New leader elected: Node 2
2016/06/23 07:44:22 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:22 [DEBUG] raft: Node 127.0.0.1:18002 updated peer set (2): [127.0.0.1:18002]
2016/06/23 07:44:22 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:23 [INFO] consul: member 'Node 2' joined, marking health alive
2016/06/23 07:44:23 [INFO] agent: requesting shutdown
2016/06/23 07:44:23 [INFO] consul: shutting down server
2016/06/23 07:44:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:23 [INFO] agent: shutdown complete
2016/06/23 07:44:23 [DEBUG] http: Shutting down http server (127.0.0.1:18802)
--- PASS: TestACLUpdate_Upsert (2.45s)
=== RUN   TestACLDestroy
2016/06/23 07:44:24 [INFO] raft: Node at 127.0.0.1:18003 [Follower] entering Follower state
2016/06/23 07:44:24 [INFO] serf: EventMemberJoin: Node 3 127.0.0.1
2016/06/23 07:44:24 [INFO] consul: adding LAN server Node 3 (Addr: 127.0.0.1:18003) (DC: dc1)
2016/06/23 07:44:24 [INFO] serf: EventMemberJoin: Node 3.dc1 127.0.0.1
2016/06/23 07:44:24 [INFO] consul: adding WAN server Node 3.dc1 (Addr: 127.0.0.1:18003) (DC: dc1)
2016/06/23 07:44:24 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:24 [INFO] raft: Node at 127.0.0.1:18003 [Candidate] entering Candidate state
2016/06/23 07:44:24 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:24 [DEBUG] raft: Vote granted from 127.0.0.1:18003. Tally: 1
2016/06/23 07:44:24 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:24 [INFO] raft: Node at 127.0.0.1:18003 [Leader] entering Leader state
2016/06/23 07:44:24 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:24 [INFO] consul: New leader elected: Node 3
2016/06/23 07:44:25 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:25 [DEBUG] raft: Node 127.0.0.1:18003 updated peer set (2): [127.0.0.1:18003]
2016/06/23 07:44:25 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:25 [INFO] consul: member 'Node 3' joined, marking health alive
2016/06/23 07:44:26 [INFO] agent: requesting shutdown
2016/06/23 07:44:26 [INFO] consul: shutting down server
2016/06/23 07:44:26 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:26 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:26 [INFO] agent: shutdown complete
2016/06/23 07:44:26 [DEBUG] http: Shutting down http server (127.0.0.1:18803)
--- PASS: TestACLDestroy (2.68s)
=== RUN   TestACLClone
2016/06/23 07:44:26 [INFO] raft: Node at 127.0.0.1:18004 [Follower] entering Follower state
2016/06/23 07:44:26 [INFO] serf: EventMemberJoin: Node 4 127.0.0.1
2016/06/23 07:44:26 [INFO] consul: adding LAN server Node 4 (Addr: 127.0.0.1:18004) (DC: dc1)
2016/06/23 07:44:26 [INFO] serf: EventMemberJoin: Node 4.dc1 127.0.0.1
2016/06/23 07:44:26 [INFO] consul: adding WAN server Node 4.dc1 (Addr: 127.0.0.1:18004) (DC: dc1)
2016/06/23 07:44:27 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:27 [INFO] raft: Node at 127.0.0.1:18004 [Candidate] entering Candidate state
2016/06/23 07:44:27 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:27 [DEBUG] raft: Vote granted from 127.0.0.1:18004. Tally: 1
2016/06/23 07:44:27 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:27 [INFO] raft: Node at 127.0.0.1:18004 [Leader] entering Leader state
2016/06/23 07:44:27 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:27 [INFO] consul: New leader elected: Node 4
2016/06/23 07:44:27 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:27 [DEBUG] raft: Node 127.0.0.1:18004 updated peer set (2): [127.0.0.1:18004]
2016/06/23 07:44:27 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:28 [INFO] consul: member 'Node 4' joined, marking health alive
2016/06/23 07:44:29 [INFO] agent: requesting shutdown
2016/06/23 07:44:29 [INFO] consul: shutting down server
2016/06/23 07:44:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:29 [INFO] agent: shutdown complete
2016/06/23 07:44:29 [DEBUG] http: Shutting down http server (127.0.0.1:18804)
--- PASS: TestACLClone (2.97s)
=== RUN   TestACLGet
2016/06/23 07:44:30 [INFO] raft: Node at 127.0.0.1:18005 [Follower] entering Follower state
2016/06/23 07:44:30 [INFO] serf: EventMemberJoin: Node 5 127.0.0.1
2016/06/23 07:44:30 [INFO] consul: adding LAN server Node 5 (Addr: 127.0.0.1:18005) (DC: dc1)
2016/06/23 07:44:30 [INFO] serf: EventMemberJoin: Node 5.dc1 127.0.0.1
2016/06/23 07:44:30 [INFO] consul: adding WAN server Node 5.dc1 (Addr: 127.0.0.1:18005) (DC: dc1)
2016/06/23 07:44:30 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:30 [INFO] raft: Node at 127.0.0.1:18005 [Candidate] entering Candidate state
2016/06/23 07:44:30 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:30 [DEBUG] raft: Vote granted from 127.0.0.1:18005. Tally: 1
2016/06/23 07:44:30 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:30 [INFO] raft: Node at 127.0.0.1:18005 [Leader] entering Leader state
2016/06/23 07:44:30 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:30 [INFO] consul: New leader elected: Node 5
2016/06/23 07:44:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:31 [DEBUG] raft: Node 127.0.0.1:18005 updated peer set (2): [127.0.0.1:18005]
2016/06/23 07:44:31 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:31 [INFO] consul: member 'Node 5' joined, marking health alive
2016/06/23 07:44:31 [INFO] agent: requesting shutdown
2016/06/23 07:44:31 [INFO] consul: shutting down server
2016/06/23 07:44:31 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:31 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:31 [INFO] agent: shutdown complete
2016/06/23 07:44:31 [DEBUG] http: Shutting down http server (127.0.0.1:18805)
2016/06/23 07:44:32 [INFO] raft: Node at 127.0.0.1:18006 [Follower] entering Follower state
2016/06/23 07:44:32 [INFO] serf: EventMemberJoin: Node 6 127.0.0.1
2016/06/23 07:44:32 [INFO] consul: adding LAN server Node 6 (Addr: 127.0.0.1:18006) (DC: dc1)
2016/06/23 07:44:32 [INFO] serf: EventMemberJoin: Node 6.dc1 127.0.0.1
2016/06/23 07:44:32 [INFO] consul: adding WAN server Node 6.dc1 (Addr: 127.0.0.1:18006) (DC: dc1)
2016/06/23 07:44:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:32 [INFO] raft: Node at 127.0.0.1:18006 [Candidate] entering Candidate state
2016/06/23 07:44:33 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:33 [DEBUG] raft: Vote granted from 127.0.0.1:18006. Tally: 1
2016/06/23 07:44:33 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:33 [INFO] raft: Node at 127.0.0.1:18006 [Leader] entering Leader state
2016/06/23 07:44:33 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:33 [INFO] consul: New leader elected: Node 6
2016/06/23 07:44:34 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:34 [DEBUG] raft: Node 127.0.0.1:18006 updated peer set (2): [127.0.0.1:18006]
2016/06/23 07:44:34 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:35 [INFO] consul: member 'Node 6' joined, marking health alive
2016/06/23 07:44:35 [INFO] agent: requesting shutdown
2016/06/23 07:44:35 [INFO] consul: shutting down server
2016/06/23 07:44:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:36 [INFO] agent: shutdown complete
2016/06/23 07:44:36 [DEBUG] http: Shutting down http server (127.0.0.1:18806)
--- PASS: TestACLGet (6.69s)
=== RUN   TestACLList
2016/06/23 07:44:36 [INFO] raft: Node at 127.0.0.1:18007 [Follower] entering Follower state
2016/06/23 07:44:36 [INFO] serf: EventMemberJoin: Node 7 127.0.0.1
2016/06/23 07:44:36 [INFO] consul: adding LAN server Node 7 (Addr: 127.0.0.1:18007) (DC: dc1)
2016/06/23 07:44:36 [INFO] serf: EventMemberJoin: Node 7.dc1 127.0.0.1
2016/06/23 07:44:36 [INFO] consul: adding WAN server Node 7.dc1 (Addr: 127.0.0.1:18007) (DC: dc1)
2016/06/23 07:44:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:36 [INFO] raft: Node at 127.0.0.1:18007 [Candidate] entering Candidate state
2016/06/23 07:44:37 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:37 [DEBUG] raft: Vote granted from 127.0.0.1:18007. Tally: 1
2016/06/23 07:44:37 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:37 [INFO] raft: Node at 127.0.0.1:18007 [Leader] entering Leader state
2016/06/23 07:44:37 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:37 [INFO] consul: New leader elected: Node 7
2016/06/23 07:44:37 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:37 [DEBUG] raft: Node 127.0.0.1:18007 updated peer set (2): [127.0.0.1:18007]
2016/06/23 07:44:37 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:38 [INFO] consul: member 'Node 7' joined, marking health alive
2016/06/23 07:44:40 [INFO] agent: requesting shutdown
2016/06/23 07:44:40 [INFO] consul: shutting down server
2016/06/23 07:44:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:40 [INFO] agent: shutdown complete
2016/06/23 07:44:40 [DEBUG] http: Shutting down http server (127.0.0.1:18807)
--- PASS: TestACLList (4.96s)
=== RUN   TestHTTPAgentServices
2016/06/23 07:44:41 [INFO] serf: EventMemberJoin: Node 8 127.0.0.1
2016/06/23 07:44:41 [INFO] raft: Node at 127.0.0.1:18008 [Follower] entering Follower state
2016/06/23 07:44:41 [INFO] consul: adding LAN server Node 8 (Addr: 127.0.0.1:18008) (DC: dc1)
2016/06/23 07:44:41 [INFO] serf: EventMemberJoin: Node 8.dc1 127.0.0.1
2016/06/23 07:44:41 [INFO] consul: adding WAN server Node 8.dc1 (Addr: 127.0.0.1:18008) (DC: dc1)
2016/06/23 07:44:41 [INFO] agent: requesting shutdown
2016/06/23 07:44:41 [INFO] consul: shutting down server
2016/06/23 07:44:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:42 [INFO] raft: Node at 127.0.0.1:18008 [Candidate] entering Candidate state
2016/06/23 07:44:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:42 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:42 [INFO] agent: shutdown complete
2016/06/23 07:44:42 [DEBUG] http: Shutting down http server (127.0.0.1:18808)
--- PASS: TestHTTPAgentServices (1.93s)
=== RUN   TestHTTPAgentChecks
2016/06/23 07:44:43 [INFO] raft: Node at 127.0.0.1:18009 [Follower] entering Follower state
2016/06/23 07:44:43 [INFO] serf: EventMemberJoin: Node 9 127.0.0.1
2016/06/23 07:44:43 [INFO] consul: adding LAN server Node 9 (Addr: 127.0.0.1:18009) (DC: dc1)
2016/06/23 07:44:43 [INFO] serf: EventMemberJoin: Node 9.dc1 127.0.0.1
2016/06/23 07:44:43 [INFO] consul: adding WAN server Node 9.dc1 (Addr: 127.0.0.1:18009) (DC: dc1)
2016/06/23 07:44:43 [INFO] agent: requesting shutdown
2016/06/23 07:44:43 [INFO] consul: shutting down server
2016/06/23 07:44:43 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:43 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:43 [INFO] raft: Node at 127.0.0.1:18009 [Candidate] entering Candidate state
2016/06/23 07:44:43 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:43 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:43 [INFO] agent: shutdown complete
2016/06/23 07:44:43 [DEBUG] http: Shutting down http server (127.0.0.1:18809)
--- PASS: TestHTTPAgentChecks (1.00s)
=== RUN   TestHTTPAgentSelf
2016/06/23 07:44:44 [INFO] raft: Node at 127.0.0.1:18010 [Follower] entering Follower state
2016/06/23 07:44:44 [INFO] serf: EventMemberJoin: Node 10 127.0.0.1
2016/06/23 07:44:44 [INFO] serf: EventMemberJoin: Node 10.dc1 127.0.0.1
2016/06/23 07:44:44 [INFO] agent: requesting shutdown
2016/06/23 07:44:44 [INFO] consul: shutting down server
2016/06/23 07:44:44 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:44 [INFO] consul: adding LAN server Node 10 (Addr: 127.0.0.1:18010) (DC: dc1)
2016/06/23 07:44:44 [INFO] consul: adding WAN server Node 10.dc1 (Addr: 127.0.0.1:18010) (DC: dc1)
2016/06/23 07:44:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:44 [INFO] raft: Node at 127.0.0.1:18010 [Candidate] entering Candidate state
2016/06/23 07:44:44 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:45 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:45 [INFO] agent: shutdown complete
2016/06/23 07:44:45 [DEBUG] http: Shutting down http server (127.0.0.1:18810)
--- PASS: TestHTTPAgentSelf (1.26s)
=== RUN   TestHTTPAgentMembers
2016/06/23 07:44:45 [INFO] raft: Node at 127.0.0.1:18011 [Follower] entering Follower state
2016/06/23 07:44:45 [INFO] serf: EventMemberJoin: Node 11 127.0.0.1
2016/06/23 07:44:45 [INFO] consul: adding LAN server Node 11 (Addr: 127.0.0.1:18011) (DC: dc1)
2016/06/23 07:44:45 [INFO] serf: EventMemberJoin: Node 11.dc1 127.0.0.1
2016/06/23 07:44:45 [INFO] agent: requesting shutdown
2016/06/23 07:44:45 [INFO] consul: shutting down server
2016/06/23 07:44:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:45 [INFO] consul: adding WAN server Node 11.dc1 (Addr: 127.0.0.1:18011) (DC: dc1)
2016/06/23 07:44:45 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:45 [INFO] raft: Node at 127.0.0.1:18011 [Candidate] entering Candidate state
2016/06/23 07:44:46 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:46 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:46 [INFO] agent: shutdown complete
2016/06/23 07:44:46 [DEBUG] http: Shutting down http server (127.0.0.1:18811)
--- PASS: TestHTTPAgentMembers (1.23s)
=== RUN   TestHTTPAgentMembers_WAN
2016/06/23 07:44:46 [INFO] raft: Node at 127.0.0.1:18012 [Follower] entering Follower state
2016/06/23 07:44:46 [INFO] serf: EventMemberJoin: Node 12 127.0.0.1
2016/06/23 07:44:46 [INFO] serf: EventMemberJoin: Node 12.dc1 127.0.0.1
2016/06/23 07:44:46 [INFO] agent: requesting shutdown
2016/06/23 07:44:46 [INFO] consul: shutting down server
2016/06/23 07:44:46 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:46 [INFO] raft: Node at 127.0.0.1:18012 [Candidate] entering Candidate state
2016/06/23 07:44:47 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:47 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:47 [INFO] agent: shutdown complete
2016/06/23 07:44:47 [DEBUG] http: Shutting down http server (127.0.0.1:18812)
--- PASS: TestHTTPAgentMembers_WAN (1.47s)
=== RUN   TestHTTPAgentJoin
2016/06/23 07:44:48 [INFO] serf: EventMemberJoin: Node 13 127.0.0.1
2016/06/23 07:44:48 [INFO] serf: EventMemberJoin: Node 13.dc1 127.0.0.1
2016/06/23 07:44:48 [INFO] consul: adding LAN server Node 13 (Addr: 127.0.0.1:18013) (DC: dc1)
2016/06/23 07:44:48 [INFO] consul: adding WAN server Node 13.dc1 (Addr: 127.0.0.1:18013) (DC: dc1)
2016/06/23 07:44:48 [INFO] raft: Node at 127.0.0.1:18013 [Follower] entering Follower state
2016/06/23 07:44:48 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:48 [INFO] raft: Node at 127.0.0.1:18013 [Candidate] entering Candidate state
2016/06/23 07:44:49 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:49 [DEBUG] raft: Vote granted from 127.0.0.1:18013. Tally: 1
2016/06/23 07:44:49 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:49 [INFO] raft: Node at 127.0.0.1:18013 [Leader] entering Leader state
2016/06/23 07:44:49 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:49 [INFO] consul: New leader elected: Node 13
2016/06/23 07:44:49 [INFO] raft: Node at 127.0.0.1:18014 [Follower] entering Follower state
2016/06/23 07:44:49 [INFO] serf: EventMemberJoin: Node 14 127.0.0.1
2016/06/23 07:44:49 [INFO] serf: EventMemberJoin: Node 14.dc1 127.0.0.1
2016/06/23 07:44:49 [INFO] agent: (LAN) joining: [127.0.0.1:18214]
2016/06/23 07:44:49 [INFO] consul: adding WAN server Node 14.dc1 (Addr: 127.0.0.1:18014) (DC: dc1)
2016/06/23 07:44:49 [DEBUG] memberlist: TCP connection from=127.0.0.1:39296
2016/06/23 07:44:49 [INFO] consul: adding LAN server Node 14 (Addr: 127.0.0.1:18014) (DC: dc1)
2016/06/23 07:44:49 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18214
2016/06/23 07:44:49 [INFO] serf: EventMemberJoin: Node 13 127.0.0.1
2016/06/23 07:44:49 [INFO] consul: adding LAN server Node 13 (Addr: 127.0.0.1:18013) (DC: dc1)
2016/06/23 07:44:49 [INFO] consul: New leader elected: Node 13
2016/06/23 07:44:49 [INFO] serf: EventMemberJoin: Node 14 127.0.0.1
2016/06/23 07:44:49 [INFO] consul: adding LAN server Node 14 (Addr: 127.0.0.1:18014) (DC: dc1)
2016/06/23 07:44:49 [INFO] agent: (LAN) joined: 1 Err: <nil>
2016/06/23 07:44:49 [INFO] agent: requesting shutdown
2016/06/23 07:44:49 [INFO] consul: shutting down server
2016/06/23 07:44:49 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:49 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:49 [INFO] raft: Node at 127.0.0.1:18014 [Candidate] entering Candidate state
2016/06/23 07:44:49 [INFO] memberlist: Suspect Node 14 has failed, no acks received
2016/06/23 07:44:49 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:49 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:50 [DEBUG] raft: Node 127.0.0.1:18013 updated peer set (2): [127.0.0.1:18013]
2016/06/23 07:44:50 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:50 [INFO] memberlist: Suspect Node 14 has failed, no acks received
2016/06/23 07:44:50 [INFO] memberlist: Marking Node 14 as failed, suspect timeout reached
2016/06/23 07:44:50 [INFO] serf: EventMemberFailed: Node 14 127.0.0.1
2016/06/23 07:44:50 [INFO] consul: removing LAN server Node 14 (Addr: 127.0.0.1:18014) (DC: dc1)
2016/06/23 07:44:50 [INFO] memberlist: Suspect Node 14 has failed, no acks received
2016/06/23 07:44:50 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:50 [INFO] agent: shutdown complete
2016/06/23 07:44:50 [INFO] agent: requesting shutdown
2016/06/23 07:44:50 [INFO] consul: shutting down server
2016/06/23 07:44:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:50 [INFO] consul: member 'Node 13' joined, marking health alive
2016/06/23 07:44:50 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:44:50 [ERR] consul: failed to reconcile member: {Node 13 127.0.0.1 18213 map[build:a.bc.d: port:18013 bootstrap:1 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:44:50 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:44:50 [INFO] agent: shutdown complete
2016/06/23 07:44:50 [DEBUG] http: Shutting down http server (127.0.0.1:18813)
--- PASS: TestHTTPAgentJoin (2.85s)
=== RUN   TestHTTPAgentJoin_WAN
2016/06/23 07:44:51 [INFO] raft: Node at 127.0.0.1:18015 [Follower] entering Follower state
2016/06/23 07:44:51 [INFO] serf: EventMemberJoin: Node 15 127.0.0.1
2016/06/23 07:44:51 [INFO] consul: adding LAN server Node 15 (Addr: 127.0.0.1:18015) (DC: dc1)
2016/06/23 07:44:51 [INFO] serf: EventMemberJoin: Node 15.dc1 127.0.0.1
2016/06/23 07:44:51 [INFO] consul: adding WAN server Node 15.dc1 (Addr: 127.0.0.1:18015) (DC: dc1)
2016/06/23 07:44:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:51 [INFO] raft: Node at 127.0.0.1:18015 [Candidate] entering Candidate state
2016/06/23 07:44:52 [INFO] raft: Node at 127.0.0.1:18016 [Follower] entering Follower state
2016/06/23 07:44:52 [INFO] serf: EventMemberJoin: Node 16 127.0.0.1
2016/06/23 07:44:52 [INFO] consul: adding LAN server Node 16 (Addr: 127.0.0.1:18016) (DC: dc1)
2016/06/23 07:44:52 [INFO] serf: EventMemberJoin: Node 16.dc1 127.0.0.1
2016/06/23 07:44:52 [INFO] consul: adding WAN server Node 16.dc1 (Addr: 127.0.0.1:18016) (DC: dc1)
2016/06/23 07:44:52 [INFO] agent: (WAN) joining: [127.0.0.1:18416]
2016/06/23 07:44:52 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18416
2016/06/23 07:44:52 [DEBUG] memberlist: TCP connection from=127.0.0.1:59426
2016/06/23 07:44:52 [INFO] serf: EventMemberJoin: Node 15.dc1 127.0.0.1
2016/06/23 07:44:52 [INFO] serf: EventMemberJoin: Node 16.dc1 127.0.0.1
2016/06/23 07:44:52 [INFO] consul: adding WAN server Node 15.dc1 (Addr: 127.0.0.1:18015) (DC: dc1)
2016/06/23 07:44:52 [INFO] agent: (WAN) joined: 1 Err: <nil>
2016/06/23 07:44:52 [INFO] consul: adding WAN server Node 16.dc1 (Addr: 127.0.0.1:18016) (DC: dc1)
2016/06/23 07:44:52 [INFO] agent: requesting shutdown
2016/06/23 07:44:52 [INFO] consul: shutting down server
2016/06/23 07:44:52 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:52 [INFO] raft: Node at 127.0.0.1:18016 [Candidate] entering Candidate state
2016/06/23 07:44:52 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:52 [DEBUG] raft: Vote granted from 127.0.0.1:18015. Tally: 1
2016/06/23 07:44:52 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:52 [INFO] raft: Node at 127.0.0.1:18015 [Leader] entering Leader state
2016/06/23 07:44:52 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:52 [INFO] consul: New leader elected: Node 15
2016/06/23 07:44:52 [DEBUG] serf: messageJoinType: Node 15.dc1
2016/06/23 07:44:52 [DEBUG] serf: messageJoinType: Node 15.dc1
2016/06/23 07:44:52 [DEBUG] serf: messageJoinType: Node 15.dc1
2016/06/23 07:44:52 [DEBUG] serf: messageJoinType: Node 15.dc1
2016/06/23 07:44:52 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:52 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:52 [DEBUG] raft: Node 127.0.0.1:18015 updated peer set (2): [127.0.0.1:18015]
2016/06/23 07:44:52 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:52 [INFO] memberlist: Suspect Node 16.dc1 has failed, no acks received
2016/06/23 07:44:52 [INFO] memberlist: Suspect Node 16.dc1 has failed, no acks received
2016/06/23 07:44:52 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:52 [INFO] agent: shutdown complete
2016/06/23 07:44:52 [INFO] agent: requesting shutdown
2016/06/23 07:44:52 [INFO] consul: shutting down server
2016/06/23 07:44:52 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:52 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:52 [INFO] memberlist: Suspect Node 16.dc1 has failed, no acks received
2016/06/23 07:44:52 [INFO] memberlist: Marking Node 16.dc1 as failed, suspect timeout reached
2016/06/23 07:44:52 [INFO] serf: EventMemberFailed: Node 16.dc1 127.0.0.1
2016/06/23 07:44:52 [INFO] consul: member 'Node 15' joined, marking health alive
2016/06/23 07:44:53 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:44:53 [ERR] consul: failed to reconcile member: {Node 15 127.0.0.1 18215 map[role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build:a.bc.d: port:18015 bootstrap:1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:44:53 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:44:53 [INFO] agent: shutdown complete
2016/06/23 07:44:53 [DEBUG] http: Shutting down http server (127.0.0.1:18815)
--- PASS: TestHTTPAgentJoin_WAN (2.43s)
=== RUN   TestHTTPAgentForceLeave
2016/06/23 07:44:53 [INFO] raft: Node at 127.0.0.1:18017 [Follower] entering Follower state
2016/06/23 07:44:53 [INFO] serf: EventMemberJoin: Node 17 127.0.0.1
2016/06/23 07:44:53 [INFO] consul: adding LAN server Node 17 (Addr: 127.0.0.1:18017) (DC: dc1)
2016/06/23 07:44:53 [INFO] serf: EventMemberJoin: Node 17.dc1 127.0.0.1
2016/06/23 07:44:53 [INFO] consul: adding WAN server Node 17.dc1 (Addr: 127.0.0.1:18017) (DC: dc1)
2016/06/23 07:44:53 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:53 [INFO] raft: Node at 127.0.0.1:18017 [Candidate] entering Candidate state
2016/06/23 07:44:54 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:54 [DEBUG] raft: Vote granted from 127.0.0.1:18017. Tally: 1
2016/06/23 07:44:54 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:54 [INFO] raft: Node at 127.0.0.1:18017 [Leader] entering Leader state
2016/06/23 07:44:54 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:54 [INFO] consul: New leader elected: Node 17
2016/06/23 07:44:54 [INFO] serf: EventMemberJoin: Node 18 127.0.0.1
2016/06/23 07:44:54 [INFO] raft: Node at 127.0.0.1:18018 [Follower] entering Follower state
2016/06/23 07:44:54 [INFO] consul: adding LAN server Node 18 (Addr: 127.0.0.1:18018) (DC: dc1)
2016/06/23 07:44:54 [INFO] serf: EventMemberJoin: Node 18.dc1 127.0.0.1
2016/06/23 07:44:54 [INFO] consul: adding WAN server Node 18.dc1 (Addr: 127.0.0.1:18018) (DC: dc1)
2016/06/23 07:44:54 [INFO] agent: (LAN) joining: [127.0.0.1:18218]
2016/06/23 07:44:54 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18218
2016/06/23 07:44:54 [DEBUG] memberlist: TCP connection from=127.0.0.1:44644
2016/06/23 07:44:54 [INFO] serf: EventMemberJoin: Node 17 127.0.0.1
2016/06/23 07:44:54 [INFO] serf: EventMemberJoin: Node 18 127.0.0.1
2016/06/23 07:44:54 [INFO] consul: adding LAN server Node 17 (Addr: 127.0.0.1:18017) (DC: dc1)
2016/06/23 07:44:54 [INFO] agent: (LAN) joined: 1 Err: <nil>
2016/06/23 07:44:54 [INFO] agent: requesting shutdown
2016/06/23 07:44:54 [INFO] consul: shutting down server
2016/06/23 07:44:54 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:54 [INFO] consul: adding LAN server Node 18 (Addr: 127.0.0.1:18018) (DC: dc1)
2016/06/23 07:44:54 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:54 [INFO] raft: Node at 127.0.0.1:18018 [Candidate] entering Candidate state
2016/06/23 07:44:54 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:54 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:54 [DEBUG] raft: Node 127.0.0.1:18017 updated peer set (2): [127.0.0.1:18017]
2016/06/23 07:44:54 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:54 [INFO] memberlist: Suspect Node 18 has failed, no acks received
2016/06/23 07:44:54 [INFO] memberlist: Suspect Node 18 has failed, no acks received
2016/06/23 07:44:55 [INFO] memberlist: Marking Node 18 as failed, suspect timeout reached
2016/06/23 07:44:55 [INFO] serf: EventMemberFailed: Node 18 127.0.0.1
2016/06/23 07:44:55 [INFO] consul: removing LAN server Node 18 (Addr: 127.0.0.1:18018) (DC: dc1)
2016/06/23 07:44:55 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:55 [INFO] agent: shutdown complete
2016/06/23 07:44:55 [INFO] Force leaving node: Node 18
2016/06/23 07:44:55 [INFO] serf: EventMemberLeave (forced): Node 18 127.0.0.1
2016/06/23 07:44:55 [INFO] consul: removing LAN server Node 18 (Addr: 127.0.0.1:18018) (DC: dc1)
2016/06/23 07:44:55 [INFO] agent: requesting shutdown
2016/06/23 07:44:55 [INFO] consul: shutting down server
2016/06/23 07:44:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:55 [ERR] consul: ACL initialization failed: failed to create master token: raft is already shutdown
2016/06/23 07:44:55 [ERR] consul: failed to establish leadership: failed to create master token: raft is already shutdown
2016/06/23 07:44:55 [INFO] agent: shutdown complete
2016/06/23 07:44:55 [DEBUG] http: Shutting down http server (127.0.0.1:18817)
--- PASS: TestHTTPAgentForceLeave (2.09s)
=== RUN   TestHTTPAgentRegisterCheck
2016/06/23 07:44:55 [INFO] raft: Node at 127.0.0.1:18019 [Follower] entering Follower state
2016/06/23 07:44:55 [INFO] serf: EventMemberJoin: Node 19 127.0.0.1
2016/06/23 07:44:55 [INFO] consul: adding LAN server Node 19 (Addr: 127.0.0.1:18019) (DC: dc1)
2016/06/23 07:44:55 [INFO] serf: EventMemberJoin: Node 19.dc1 127.0.0.1
2016/06/23 07:44:55 [INFO] consul: adding WAN server Node 19.dc1 (Addr: 127.0.0.1:18019) (DC: dc1)
2016/06/23 07:44:55 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:44:55 [INFO] agent: requesting shutdown
2016/06/23 07:44:55 [INFO] consul: shutting down server
2016/06/23 07:44:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:55 [INFO] raft: Node at 127.0.0.1:18019 [Candidate] entering Candidate state
2016/06/23 07:44:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:56 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:56 [INFO] agent: shutdown complete
2016/06/23 07:44:56 [DEBUG] http: Shutting down http server (127.0.0.1:18819)
--- PASS: TestHTTPAgentRegisterCheck (1.13s)
=== RUN   TestHTTPAgentRegisterCheckPassing
2016/06/23 07:44:57 [INFO] raft: Node at 127.0.0.1:18020 [Follower] entering Follower state
2016/06/23 07:44:57 [INFO] serf: EventMemberJoin: Node 20 127.0.0.1
2016/06/23 07:44:57 [INFO] consul: adding LAN server Node 20 (Addr: 127.0.0.1:18020) (DC: dc1)
2016/06/23 07:44:57 [INFO] serf: EventMemberJoin: Node 20.dc1 127.0.0.1
2016/06/23 07:44:57 [INFO] consul: adding WAN server Node 20.dc1 (Addr: 127.0.0.1:18020) (DC: dc1)
2016/06/23 07:44:57 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:44:57 [INFO] agent: requesting shutdown
2016/06/23 07:44:57 [INFO] consul: shutting down server
2016/06/23 07:44:57 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:57 [INFO] raft: Node at 127.0.0.1:18020 [Candidate] entering Candidate state
2016/06/23 07:44:57 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:57 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:57 [INFO] agent: shutdown complete
2016/06/23 07:44:57 [DEBUG] http: Shutting down http server (127.0.0.1:18820)
--- PASS: TestHTTPAgentRegisterCheckPassing (1.33s)
=== RUN   TestHTTPAgentRegisterCheckBadStatus
2016/06/23 07:44:58 [INFO] raft: Node at 127.0.0.1:18021 [Follower] entering Follower state
2016/06/23 07:44:58 [INFO] serf: EventMemberJoin: Node 21 127.0.0.1
2016/06/23 07:44:58 [INFO] consul: adding LAN server Node 21 (Addr: 127.0.0.1:18021) (DC: dc1)
2016/06/23 07:44:58 [INFO] serf: EventMemberJoin: Node 21.dc1 127.0.0.1
2016/06/23 07:44:58 [INFO] consul: adding WAN server Node 21.dc1 (Addr: 127.0.0.1:18021) (DC: dc1)
2016/06/23 07:44:58 [INFO] agent: requesting shutdown
2016/06/23 07:44:58 [INFO] consul: shutting down server
2016/06/23 07:44:58 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:58 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:58 [INFO] raft: Node at 127.0.0.1:18021 [Candidate] entering Candidate state
2016/06/23 07:44:58 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:59 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:59 [INFO] agent: shutdown complete
2016/06/23 07:44:59 [DEBUG] http: Shutting down http server (127.0.0.1:18821)
--- PASS: TestHTTPAgentRegisterCheckBadStatus (1.75s)
=== RUN   TestHTTPAgentDeregisterCheck
2016/06/23 07:45:00 [INFO] raft: Node at 127.0.0.1:18022 [Follower] entering Follower state
2016/06/23 07:45:00 [INFO] serf: EventMemberJoin: Node 22 127.0.0.1
2016/06/23 07:45:00 [INFO] consul: adding LAN server Node 22 (Addr: 127.0.0.1:18022) (DC: dc1)
2016/06/23 07:45:00 [INFO] serf: EventMemberJoin: Node 22.dc1 127.0.0.1
2016/06/23 07:45:00 [DEBUG] agent: removed check "test"
2016/06/23 07:45:00 [INFO] consul: adding WAN server Node 22.dc1 (Addr: 127.0.0.1:18022) (DC: dc1)
2016/06/23 07:45:00 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:00 [INFO] agent: requesting shutdown
2016/06/23 07:45:00 [INFO] consul: shutting down server
2016/06/23 07:45:00 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:00 [INFO] raft: Node at 127.0.0.1:18022 [Candidate] entering Candidate state
2016/06/23 07:45:00 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:00 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:00 [INFO] agent: shutdown complete
2016/06/23 07:45:00 [DEBUG] http: Shutting down http server (127.0.0.1:18822)
--- PASS: TestHTTPAgentDeregisterCheck (1.53s)
=== RUN   TestHTTPAgentPassCheck
2016/06/23 07:45:01 [INFO] raft: Node at 127.0.0.1:18023 [Follower] entering Follower state
2016/06/23 07:45:01 [INFO] serf: EventMemberJoin: Node 23 127.0.0.1
2016/06/23 07:45:01 [INFO] consul: adding LAN server Node 23 (Addr: 127.0.0.1:18023) (DC: dc1)
2016/06/23 07:45:01 [INFO] serf: EventMemberJoin: Node 23.dc1 127.0.0.1
2016/06/23 07:45:01 [INFO] consul: adding WAN server Node 23.dc1 (Addr: 127.0.0.1:18023) (DC: dc1)
2016/06/23 07:45:01 [DEBUG] agent: Check 'test' status is now passing
2016/06/23 07:45:01 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:01 [INFO] agent: requesting shutdown
2016/06/23 07:45:01 [INFO] consul: shutting down server
2016/06/23 07:45:01 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:01 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:01 [INFO] raft: Node at 127.0.0.1:18023 [Candidate] entering Candidate state
2016/06/23 07:45:02 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:02 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:02 [INFO] agent: shutdown complete
2016/06/23 07:45:02 [DEBUG] http: Shutting down http server (127.0.0.1:18823)
--- PASS: TestHTTPAgentPassCheck (2.02s)
=== RUN   TestHTTPAgentWarnCheck
2016/06/23 07:45:03 [INFO] raft: Node at 127.0.0.1:18024 [Follower] entering Follower state
2016/06/23 07:45:03 [INFO] serf: EventMemberJoin: Node 24 127.0.0.1
2016/06/23 07:45:03 [INFO] consul: adding LAN server Node 24 (Addr: 127.0.0.1:18024) (DC: dc1)
2016/06/23 07:45:03 [INFO] serf: EventMemberJoin: Node 24.dc1 127.0.0.1
2016/06/23 07:45:03 [INFO] consul: adding WAN server Node 24.dc1 (Addr: 127.0.0.1:18024) (DC: dc1)
2016/06/23 07:45:03 [DEBUG] agent: Check 'test' status is now warning
2016/06/23 07:45:03 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:03 [INFO] agent: requesting shutdown
2016/06/23 07:45:03 [INFO] consul: shutting down server
2016/06/23 07:45:03 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:03 [INFO] raft: Node at 127.0.0.1:18024 [Candidate] entering Candidate state
2016/06/23 07:45:03 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:04 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:04 [INFO] agent: shutdown complete
2016/06/23 07:45:04 [DEBUG] http: Shutting down http server (127.0.0.1:18824)
--- PASS: TestHTTPAgentWarnCheck (1.76s)
=== RUN   TestHTTPAgentFailCheck
2016/06/23 07:45:05 [INFO] raft: Node at 127.0.0.1:18025 [Follower] entering Follower state
2016/06/23 07:45:05 [INFO] serf: EventMemberJoin: Node 25 127.0.0.1
2016/06/23 07:45:05 [INFO] consul: adding LAN server Node 25 (Addr: 127.0.0.1:18025) (DC: dc1)
2016/06/23 07:45:05 [INFO] serf: EventMemberJoin: Node 25.dc1 127.0.0.1
2016/06/23 07:45:05 [INFO] consul: adding WAN server Node 25.dc1 (Addr: 127.0.0.1:18025) (DC: dc1)
2016/06/23 07:45:05 [DEBUG] agent: Check 'test' status is now critical
2016/06/23 07:45:05 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:05 [INFO] agent: requesting shutdown
2016/06/23 07:45:05 [INFO] consul: shutting down server
2016/06/23 07:45:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:05 [INFO] raft: Node at 127.0.0.1:18025 [Candidate] entering Candidate state
2016/06/23 07:45:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:05 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:05 [INFO] agent: shutdown complete
2016/06/23 07:45:05 [DEBUG] http: Shutting down http server (127.0.0.1:18825)
--- PASS: TestHTTPAgentFailCheck (1.09s)
=== RUN   TestHTTPAgentUpdateCheck
2016/06/23 07:45:06 [INFO] raft: Node at 127.0.0.1:18026 [Follower] entering Follower state
2016/06/23 07:45:06 [INFO] serf: EventMemberJoin: Node 26 127.0.0.1
2016/06/23 07:45:06 [INFO] consul: adding LAN server Node 26 (Addr: 127.0.0.1:18026) (DC: dc1)
2016/06/23 07:45:06 [INFO] serf: EventMemberJoin: Node 26.dc1 127.0.0.1
2016/06/23 07:45:06 [INFO] consul: adding WAN server Node 26.dc1 (Addr: 127.0.0.1:18026) (DC: dc1)
2016/06/23 07:45:06 [DEBUG] agent: Check 'test' status is now passing
2016/06/23 07:45:06 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:06 [DEBUG] agent: Check 'test' status is now critical
2016/06/23 07:45:06 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:06 [DEBUG] agent: Check 'test' status is now warning
2016/06/23 07:45:06 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:06 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:06 [INFO] raft: Node at 127.0.0.1:18026 [Candidate] entering Candidate state
2016/06/23 07:45:06 [DEBUG] agent: Check 'test' status is now passing
2016/06/23 07:45:06 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:06 [INFO] agent: requesting shutdown
2016/06/23 07:45:06 [INFO] consul: shutting down server
2016/06/23 07:45:06 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:06 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:07 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:07 [INFO] agent: shutdown complete
2016/06/23 07:45:07 [DEBUG] http: Shutting down http server (127.0.0.1:18826)
--- PASS: TestHTTPAgentUpdateCheck (1.35s)
=== RUN   TestHTTPAgentRegisterService
2016/06/23 07:45:07 [INFO] raft: Node at 127.0.0.1:18027 [Follower] entering Follower state
2016/06/23 07:45:07 [INFO] serf: EventMemberJoin: Node 27 127.0.0.1
2016/06/23 07:45:07 [INFO] serf: EventMemberJoin: Node 27.dc1 127.0.0.1
2016/06/23 07:45:07 [INFO] consul: adding LAN server Node 27 (Addr: 127.0.0.1:18027) (DC: dc1)
2016/06/23 07:45:07 [INFO] consul: adding WAN server Node 27.dc1 (Addr: 127.0.0.1:18027) (DC: dc1)
2016/06/23 07:45:07 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:07 [INFO] agent: requesting shutdown
2016/06/23 07:45:07 [INFO] consul: shutting down server
2016/06/23 07:45:07 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:08 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:08 [INFO] raft: Node at 127.0.0.1:18027 [Candidate] entering Candidate state
2016/06/23 07:45:08 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:08 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:08 [INFO] agent: shutdown complete
2016/06/23 07:45:08 [DEBUG] http: Shutting down http server (127.0.0.1:18827)
--- PASS: TestHTTPAgentRegisterService (1.36s)
=== RUN   TestHTTPAgentDeregisterService
2016/06/23 07:45:09 [INFO] raft: Node at 127.0.0.1:18028 [Follower] entering Follower state
2016/06/23 07:45:09 [INFO] serf: EventMemberJoin: Node 28 127.0.0.1
2016/06/23 07:45:09 [INFO] consul: adding LAN server Node 28 (Addr: 127.0.0.1:18028) (DC: dc1)
2016/06/23 07:45:09 [INFO] serf: EventMemberJoin: Node 28.dc1 127.0.0.1
2016/06/23 07:45:09 [DEBUG] agent: removed service "test"
2016/06/23 07:45:09 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:09 [INFO] agent: requesting shutdown
2016/06/23 07:45:09 [INFO] consul: shutting down server
2016/06/23 07:45:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:09 [INFO] consul: adding WAN server Node 28.dc1 (Addr: 127.0.0.1:18028) (DC: dc1)
2016/06/23 07:45:09 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:09 [INFO] raft: Node at 127.0.0.1:18028 [Candidate] entering Candidate state
2016/06/23 07:45:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:09 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:09 [INFO] agent: shutdown complete
2016/06/23 07:45:09 [DEBUG] http: Shutting down http server (127.0.0.1:18828)
--- PASS: TestHTTPAgentDeregisterService (1.02s)
=== RUN   TestHTTPAgent_ServiceMaintenanceEndpoint_BadRequest
2016/06/23 07:45:10 [INFO] raft: Node at 127.0.0.1:18029 [Follower] entering Follower state
2016/06/23 07:45:10 [INFO] serf: EventMemberJoin: Node 29 127.0.0.1
2016/06/23 07:45:10 [INFO] consul: adding LAN server Node 29 (Addr: 127.0.0.1:18029) (DC: dc1)
2016/06/23 07:45:10 [INFO] serf: EventMemberJoin: Node 29.dc1 127.0.0.1
2016/06/23 07:45:10 [INFO] consul: adding WAN server Node 29.dc1 (Addr: 127.0.0.1:18029) (DC: dc1)
2016/06/23 07:45:10 [INFO] agent: requesting shutdown
2016/06/23 07:45:10 [INFO] consul: shutting down server
2016/06/23 07:45:10 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:10 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:10 [INFO] raft: Node at 127.0.0.1:18029 [Candidate] entering Candidate state
2016/06/23 07:45:10 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:10 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:10 [INFO] agent: shutdown complete
2016/06/23 07:45:10 [DEBUG] http: Shutting down http server (127.0.0.1:18829)
--- PASS: TestHTTPAgent_ServiceMaintenanceEndpoint_BadRequest (1.01s)
=== RUN   TestHTTPAgent_EnableServiceMaintenance
2016/06/23 07:45:11 [INFO] raft: Node at 127.0.0.1:18030 [Follower] entering Follower state
2016/06/23 07:45:11 [INFO] serf: EventMemberJoin: Node 30 127.0.0.1
2016/06/23 07:45:11 [INFO] consul: adding LAN server Node 30 (Addr: 127.0.0.1:18030) (DC: dc1)
2016/06/23 07:45:11 [INFO] serf: EventMemberJoin: Node 30.dc1 127.0.0.1
2016/06/23 07:45:11 [INFO] consul: adding WAN server Node 30.dc1 (Addr: 127.0.0.1:18030) (DC: dc1)
2016/06/23 07:45:11 [INFO] agent: Service "test" entered maintenance mode
2016/06/23 07:45:11 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:11 [INFO] agent: requesting shutdown
2016/06/23 07:45:11 [INFO] consul: shutting down server
2016/06/23 07:45:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:11 [INFO] raft: Node at 127.0.0.1:18030 [Candidate] entering Candidate state
2016/06/23 07:45:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:11 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:11 [INFO] agent: shutdown complete
2016/06/23 07:45:11 [DEBUG] http: Shutting down http server (127.0.0.1:18830)
--- PASS: TestHTTPAgent_EnableServiceMaintenance (1.05s)
=== RUN   TestHTTPAgent_DisableServiceMaintenance
2016/06/23 07:45:12 [INFO] raft: Node at 127.0.0.1:18031 [Follower] entering Follower state
2016/06/23 07:45:12 [INFO] serf: EventMemberJoin: Node 31 127.0.0.1
2016/06/23 07:45:12 [INFO] consul: adding LAN server Node 31 (Addr: 127.0.0.1:18031) (DC: dc1)
2016/06/23 07:45:12 [INFO] serf: EventMemberJoin: Node 31.dc1 127.0.0.1
2016/06/23 07:45:12 [INFO] consul: adding WAN server Node 31.dc1 (Addr: 127.0.0.1:18031) (DC: dc1)
2016/06/23 07:45:12 [INFO] agent: Service "test" entered maintenance mode
2016/06/23 07:45:12 [DEBUG] agent: removed check "_service_maintenance:test"
2016/06/23 07:45:12 [INFO] agent: Service "test" left maintenance mode
2016/06/23 07:45:12 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:12 [INFO] agent: requesting shutdown
2016/06/23 07:45:12 [INFO] consul: shutting down server
2016/06/23 07:45:12 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:12 [INFO] raft: Node at 127.0.0.1:18031 [Candidate] entering Candidate state
2016/06/23 07:45:12 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:12 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:12 [INFO] agent: shutdown complete
2016/06/23 07:45:12 [DEBUG] http: Shutting down http server (127.0.0.1:18831)
--- PASS: TestHTTPAgent_DisableServiceMaintenance (1.17s)
=== RUN   TestHTTPAgent_NodeMaintenanceEndpoint_BadRequest
2016/06/23 07:45:13 [INFO] raft: Node at 127.0.0.1:18032 [Follower] entering Follower state
2016/06/23 07:45:13 [INFO] serf: EventMemberJoin: Node 32 127.0.0.1
2016/06/23 07:45:13 [INFO] consul: adding LAN server Node 32 (Addr: 127.0.0.1:18032) (DC: dc1)
2016/06/23 07:45:13 [INFO] serf: EventMemberJoin: Node 32.dc1 127.0.0.1
2016/06/23 07:45:13 [INFO] consul: adding WAN server Node 32.dc1 (Addr: 127.0.0.1:18032) (DC: dc1)
2016/06/23 07:45:13 [INFO] agent: requesting shutdown
2016/06/23 07:45:13 [INFO] consul: shutting down server
2016/06/23 07:45:13 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:13 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:13 [INFO] raft: Node at 127.0.0.1:18032 [Candidate] entering Candidate state
2016/06/23 07:45:13 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:13 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:13 [INFO] agent: shutdown complete
2016/06/23 07:45:13 [DEBUG] http: Shutting down http server (127.0.0.1:18832)
--- PASS: TestHTTPAgent_NodeMaintenanceEndpoint_BadRequest (1.00s)
=== RUN   TestHTTPAgent_EnableNodeMaintenance
2016/06/23 07:45:14 [INFO] raft: Node at 127.0.0.1:18033 [Follower] entering Follower state
2016/06/23 07:45:14 [INFO] serf: EventMemberJoin: Node 33 127.0.0.1
2016/06/23 07:45:14 [INFO] consul: adding LAN server Node 33 (Addr: 127.0.0.1:18033) (DC: dc1)
2016/06/23 07:45:14 [INFO] serf: EventMemberJoin: Node 33.dc1 127.0.0.1
2016/06/23 07:45:14 [INFO] consul: adding WAN server Node 33.dc1 (Addr: 127.0.0.1:18033) (DC: dc1)
2016/06/23 07:45:14 [INFO] agent: Node entered maintenance mode
2016/06/23 07:45:14 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:14 [INFO] agent: requesting shutdown
2016/06/23 07:45:14 [INFO] consul: shutting down server
2016/06/23 07:45:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:14 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:14 [INFO] raft: Node at 127.0.0.1:18033 [Candidate] entering Candidate state
2016/06/23 07:45:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:15 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:15 [INFO] agent: shutdown complete
2016/06/23 07:45:15 [DEBUG] http: Shutting down http server (127.0.0.1:18833)
--- PASS: TestHTTPAgent_EnableNodeMaintenance (1.62s)
=== RUN   TestHTTPAgent_DisableNodeMaintenance
2016/06/23 07:45:16 [INFO] raft: Node at 127.0.0.1:18034 [Follower] entering Follower state
2016/06/23 07:45:16 [INFO] serf: EventMemberJoin: Node 34 127.0.0.1
2016/06/23 07:45:16 [INFO] consul: adding LAN server Node 34 (Addr: 127.0.0.1:18034) (DC: dc1)
2016/06/23 07:45:16 [INFO] serf: EventMemberJoin: Node 34.dc1 127.0.0.1
2016/06/23 07:45:16 [INFO] consul: adding WAN server Node 34.dc1 (Addr: 127.0.0.1:18034) (DC: dc1)
2016/06/23 07:45:16 [INFO] agent: Node entered maintenance mode
2016/06/23 07:45:16 [DEBUG] agent: removed check "_node_maintenance"
2016/06/23 07:45:16 [INFO] agent: Node left maintenance mode
2016/06/23 07:45:16 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:16 [INFO] agent: requesting shutdown
2016/06/23 07:45:16 [INFO] consul: shutting down server
2016/06/23 07:45:16 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:16 [INFO] raft: Node at 127.0.0.1:18034 [Candidate] entering Candidate state
2016/06/23 07:45:16 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:16 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:16 [INFO] agent: shutdown complete
2016/06/23 07:45:16 [DEBUG] http: Shutting down http server (127.0.0.1:18834)
--- PASS: TestHTTPAgent_DisableNodeMaintenance (1.35s)
=== RUN   TestHTTPAgentRegisterServiceCheck
2016/06/23 07:45:17 [INFO] raft: Node at 127.0.0.1:18035 [Follower] entering Follower state
2016/06/23 07:45:17 [INFO] serf: EventMemberJoin: Node 35 127.0.0.1
2016/06/23 07:45:17 [INFO] consul: adding LAN server Node 35 (Addr: 127.0.0.1:18035) (DC: dc1)
2016/06/23 07:45:17 [INFO] serf: EventMemberJoin: Node 35.dc1 127.0.0.1
2016/06/23 07:45:17 [INFO] consul: adding WAN server Node 35.dc1 (Addr: 127.0.0.1:18035) (DC: dc1)
2016/06/23 07:45:17 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:17 [ERR] agent: failed to sync changes: No cluster leader
2016/06/23 07:45:17 [INFO] agent: requesting shutdown
2016/06/23 07:45:17 [INFO] consul: shutting down server
2016/06/23 07:45:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:17 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:17 [INFO] raft: Node at 127.0.0.1:18035 [Candidate] entering Candidate state
2016/06/23 07:45:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:18 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:18 [INFO] agent: shutdown complete
2016/06/23 07:45:18 [DEBUG] http: Shutting down http server (127.0.0.1:18835)
--- PASS: TestHTTPAgentRegisterServiceCheck (1.62s)
=== RUN   TestAgentStartStop
2016/06/23 07:45:19 [INFO] raft: Node at 127.0.0.1:18036 [Follower] entering Follower state
2016/06/23 07:45:19 [INFO] serf: EventMemberJoin: Node 36 127.0.0.1
2016/06/23 07:45:19 [INFO] consul: adding LAN server Node 36 (Addr: 127.0.0.1:18036) (DC: dc1)
2016/06/23 07:45:19 [INFO] serf: EventMemberJoin: Node 36.dc1 127.0.0.1
2016/06/23 07:45:19 [INFO] consul: adding WAN server Node 36.dc1 (Addr: 127.0.0.1:18036) (DC: dc1)
2016/06/23 07:45:19 [INFO] consul: server starting leave
2016/06/23 07:45:19 [INFO] serf: EventMemberLeave: Node 36.dc1 127.0.0.1
2016/06/23 07:45:19 [INFO] serf: EventMemberLeave: Node 36 127.0.0.1
2016/06/23 07:45:19 [INFO] agent: requesting shutdown
2016/06/23 07:45:19 [INFO] consul: shutting down server
2016/06/23 07:45:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:19 [INFO] raft: Node at 127.0.0.1:18036 [Candidate] entering Candidate state
2016/06/23 07:45:20 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:20 [INFO] agent: shutdown complete
--- PASS: TestAgentStartStop (1.65s)
=== RUN   TestAgent_RPCPing
2016/06/23 07:45:20 [INFO] raft: Node at 127.0.0.1:18037 [Follower] entering Follower state
2016/06/23 07:45:20 [INFO] serf: EventMemberJoin: Node 37 127.0.0.1
2016/06/23 07:45:20 [INFO] consul: adding LAN server Node 37 (Addr: 127.0.0.1:18037) (DC: dc1)
2016/06/23 07:45:20 [INFO] serf: EventMemberJoin: Node 37.dc1 127.0.0.1
2016/06/23 07:45:20 [INFO] consul: adding WAN server Node 37.dc1 (Addr: 127.0.0.1:18037) (DC: dc1)
2016/06/23 07:45:20 [INFO] agent: requesting shutdown
2016/06/23 07:45:20 [INFO] consul: shutting down server
2016/06/23 07:45:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:20 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:20 [INFO] raft: Node at 127.0.0.1:18037 [Candidate] entering Candidate state
2016/06/23 07:45:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:21 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:21 [INFO] agent: shutdown complete
--- PASS: TestAgent_RPCPing (1.27s)
=== RUN   TestAgent_CheckAdvertiseAddrsSettings
2016/06/23 07:45:21 [INFO] raft: Node at 127.0.0.44:1235 [Follower] entering Follower state
2016/06/23 07:45:21 [INFO] serf: EventMemberJoin: Node 38 127.0.0.42
2016/06/23 07:45:21 [INFO] consul: adding LAN server Node 38 (Addr: 127.0.0.42:18038) (DC: dc1)
2016/06/23 07:45:21 [INFO] serf: EventMemberJoin: Node 38.dc1 127.0.0.43
2016/06/23 07:45:21 [INFO] agent: requesting shutdown
2016/06/23 07:45:21 [INFO] consul: shutting down server
2016/06/23 07:45:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:21 [INFO] consul: adding WAN server Node 38.dc1 (Addr: 127.0.0.43:18038) (DC: dc1)
2016/06/23 07:45:22 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:22 [INFO] raft: Node at 127.0.0.44:1235 [Candidate] entering Candidate state
2016/06/23 07:45:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:22 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:22 [INFO] agent: shutdown complete
--- PASS: TestAgent_CheckAdvertiseAddrsSettings (1.28s)
=== RUN   TestAgent_AddService
2016/06/23 07:45:22 [INFO] raft: Node at 127.0.0.1:18039 [Follower] entering Follower state
2016/06/23 07:45:22 [INFO] serf: EventMemberJoin: Node 39 127.0.0.1
2016/06/23 07:45:22 [INFO] consul: adding LAN server Node 39 (Addr: 127.0.0.1:18039) (DC: dc1)
2016/06/23 07:45:22 [INFO] serf: EventMemberJoin: Node 39.dc1 127.0.0.1
2016/06/23 07:45:22 [INFO] consul: adding WAN server Node 39.dc1 (Addr: 127.0.0.1:18039) (DC: dc1)
2016/06/23 07:45:22 [INFO] agent: requesting shutdown
2016/06/23 07:45:22 [INFO] consul: shutting down server
2016/06/23 07:45:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:23 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:23 [INFO] raft: Node at 127.0.0.1:18039 [Candidate] entering Candidate state
2016/06/23 07:45:23 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:23 [INFO] agent: shutdown complete
--- PASS: TestAgent_AddService (0.81s)
=== RUN   TestAgent_RemoveService
2016/06/23 07:45:23 [INFO] raft: Node at 127.0.0.1:18040 [Follower] entering Follower state
2016/06/23 07:45:23 [INFO] serf: EventMemberJoin: Node 40 127.0.0.1
2016/06/23 07:45:23 [INFO] consul: adding LAN server Node 40 (Addr: 127.0.0.1:18040) (DC: dc1)
2016/06/23 07:45:23 [INFO] serf: EventMemberJoin: Node 40.dc1 127.0.0.1
2016/06/23 07:45:23 [INFO] consul: adding WAN server Node 40.dc1 (Addr: 127.0.0.1:18040) (DC: dc1)
2016/06/23 07:45:23 [DEBUG] agent: removed service "redis"
2016/06/23 07:45:23 [DEBUG] agent: removed check "service:memcache"
2016/06/23 07:45:23 [DEBUG] agent: removed check "check2"
2016/06/23 07:45:23 [DEBUG] agent: removed service "memcache"
2016/06/23 07:45:23 [DEBUG] agent: removed check "service:redis:1"
2016/06/23 07:45:23 [DEBUG] agent: removed check "service:redis:2"
2016/06/23 07:45:23 [DEBUG] agent: removed service "redis"
2016/06/23 07:45:23 [INFO] agent: requesting shutdown
2016/06/23 07:45:23 [INFO] consul: shutting down server
2016/06/23 07:45:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:23 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:23 [INFO] raft: Node at 127.0.0.1:18040 [Candidate] entering Candidate state
2016/06/23 07:45:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:24 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:24 [INFO] agent: shutdown complete
--- PASS: TestAgent_RemoveService (0.85s)
=== RUN   TestAgent_AddCheck
2016/06/23 07:45:24 [INFO] raft: Node at 127.0.0.1:18041 [Follower] entering Follower state
2016/06/23 07:45:24 [INFO] serf: EventMemberJoin: Node 41 127.0.0.1
2016/06/23 07:45:24 [INFO] consul: adding LAN server Node 41 (Addr: 127.0.0.1:18041) (DC: dc1)
2016/06/23 07:45:24 [INFO] serf: EventMemberJoin: Node 41.dc1 127.0.0.1
2016/06/23 07:45:24 [INFO] consul: adding WAN server Node 41.dc1 (Addr: 127.0.0.1:18041) (DC: dc1)
2016/06/23 07:45:24 [INFO] agent: requesting shutdown
2016/06/23 07:45:24 [INFO] consul: shutting down server
2016/06/23 07:45:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:24 [DEBUG] agent: pausing 14.701588843s before first invocation of exit 0
2016/06/23 07:45:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:24 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:24 [INFO] raft: Node at 127.0.0.1:18041 [Candidate] entering Candidate state
2016/06/23 07:45:25 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:25 [INFO] agent: shutdown complete
--- PASS: TestAgent_AddCheck (0.90s)
=== RUN   TestAgent_AddCheck_StartPassing
2016/06/23 07:45:25 [INFO] raft: Node at 127.0.0.1:18042 [Follower] entering Follower state
2016/06/23 07:45:25 [INFO] serf: EventMemberJoin: Node 42 127.0.0.1
2016/06/23 07:45:25 [INFO] consul: adding LAN server Node 42 (Addr: 127.0.0.1:18042) (DC: dc1)
2016/06/23 07:45:25 [INFO] serf: EventMemberJoin: Node 42.dc1 127.0.0.1
2016/06/23 07:45:25 [INFO] agent: requesting shutdown
2016/06/23 07:45:25 [INFO] consul: shutting down server
2016/06/23 07:45:25 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:25 [DEBUG] agent: pausing 8.02263685s before first invocation of exit 0
2016/06/23 07:45:25 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:25 [INFO] raft: Node at 127.0.0.1:18042 [Candidate] entering Candidate state
2016/06/23 07:45:25 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:26 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:26 [INFO] agent: shutdown complete
--- PASS: TestAgent_AddCheck_StartPassing (0.92s)
=== RUN   TestAgent_AddCheck_MinInterval
2016/06/23 07:45:26 [INFO] raft: Node at 127.0.0.1:18043 [Follower] entering Follower state
2016/06/23 07:45:26 [INFO] serf: EventMemberJoin: Node 43 127.0.0.1
2016/06/23 07:45:26 [INFO] consul: adding LAN server Node 43 (Addr: 127.0.0.1:18043) (DC: dc1)
2016/06/23 07:45:26 [INFO] serf: EventMemberJoin: Node 43.dc1 127.0.0.1
2016/06/23 07:45:26 [WARN] agent: check 'mem' has interval below minimum of 1s
2016/06/23 07:45:26 [INFO] agent: requesting shutdown
2016/06/23 07:45:26 [INFO] consul: shutting down server
2016/06/23 07:45:26 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:26 [DEBUG] agent: pausing 764.549766ms before first invocation of exit 0
2016/06/23 07:45:26 [INFO] consul: adding WAN server Node 43.dc1 (Addr: 127.0.0.1:18043) (DC: dc1)
2016/06/23 07:45:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:26 [INFO] raft: Node at 127.0.0.1:18043 [Candidate] entering Candidate state
2016/06/23 07:45:26 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:27 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:27 [INFO] agent: shutdown complete
--- PASS: TestAgent_AddCheck_MinInterval (0.92s)
=== RUN   TestAgent_AddCheck_MissingService
2016/06/23 07:45:27 [INFO] raft: Node at 127.0.0.1:18044 [Follower] entering Follower state
2016/06/23 07:45:27 [INFO] serf: EventMemberJoin: Node 44 127.0.0.1
2016/06/23 07:45:27 [INFO] consul: adding LAN server Node 44 (Addr: 127.0.0.1:18044) (DC: dc1)
2016/06/23 07:45:27 [INFO] serf: EventMemberJoin: Node 44.dc1 127.0.0.1
2016/06/23 07:45:27 [INFO] agent: requesting shutdown
2016/06/23 07:45:27 [INFO] consul: shutting down server
2016/06/23 07:45:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:27 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:27 [INFO] raft: Node at 127.0.0.1:18044 [Candidate] entering Candidate state
2016/06/23 07:45:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:27 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:27 [INFO] agent: shutdown complete
--- PASS: TestAgent_AddCheck_MissingService (0.90s)
=== RUN   TestAgent_AddCheck_RestoreState
2016/06/23 07:45:28 [INFO] raft: Node at 127.0.0.1:18045 [Follower] entering Follower state
2016/06/23 07:45:28 [INFO] serf: EventMemberJoin: Node 45 127.0.0.1
2016/06/23 07:45:28 [INFO] consul: adding LAN server Node 45 (Addr: 127.0.0.1:18045) (DC: dc1)
2016/06/23 07:45:28 [INFO] serf: EventMemberJoin: Node 45.dc1 127.0.0.1
2016/06/23 07:45:28 [INFO] agent: requesting shutdown
2016/06/23 07:45:28 [INFO] consul: shutting down server
2016/06/23 07:45:28 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:28 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:28 [INFO] raft: Node at 127.0.0.1:18045 [Candidate] entering Candidate state
2016/06/23 07:45:28 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:29 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:29 [INFO] agent: shutdown complete
--- PASS: TestAgent_AddCheck_RestoreState (1.13s)
=== RUN   TestAgent_RemoveCheck
2016/06/23 07:45:29 [INFO] raft: Node at 127.0.0.1:18046 [Follower] entering Follower state
2016/06/23 07:45:29 [INFO] serf: EventMemberJoin: Node 46 127.0.0.1
2016/06/23 07:45:29 [INFO] consul: adding LAN server Node 46 (Addr: 127.0.0.1:18046) (DC: dc1)
2016/06/23 07:45:29 [INFO] serf: EventMemberJoin: Node 46.dc1 127.0.0.1
2016/06/23 07:45:29 [DEBUG] agent: removed check "mem"
2016/06/23 07:45:29 [INFO] consul: adding WAN server Node 46.dc1 (Addr: 127.0.0.1:18046) (DC: dc1)
2016/06/23 07:45:29 [DEBUG] agent: removed check "mem"
2016/06/23 07:45:29 [INFO] agent: requesting shutdown
2016/06/23 07:45:29 [INFO] consul: shutting down server
2016/06/23 07:45:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:29 [DEBUG] agent: pausing 2.931220921s before first invocation of exit 0
2016/06/23 07:45:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:29 [INFO] raft: Node at 127.0.0.1:18046 [Candidate] entering Candidate state
2016/06/23 07:45:30 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:30 [INFO] agent: shutdown complete
--- PASS: TestAgent_RemoveCheck (0.99s)
=== RUN   TestAgent_UpdateCheck
2016/06/23 07:45:30 [INFO] raft: Node at 127.0.0.1:18047 [Follower] entering Follower state
2016/06/23 07:45:30 [INFO] serf: EventMemberJoin: Node 47 127.0.0.1
2016/06/23 07:45:30 [INFO] consul: adding LAN server Node 47 (Addr: 127.0.0.1:18047) (DC: dc1)
2016/06/23 07:45:30 [INFO] serf: EventMemberJoin: Node 47.dc1 127.0.0.1
2016/06/23 07:45:30 [DEBUG] agent: Check 'mem' status is now passing
2016/06/23 07:45:30 [INFO] consul: adding WAN server Node 47.dc1 (Addr: 127.0.0.1:18047) (DC: dc1)
2016/06/23 07:45:30 [INFO] agent: requesting shutdown
2016/06/23 07:45:30 [INFO] consul: shutting down server
2016/06/23 07:45:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:30 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:30 [INFO] raft: Node at 127.0.0.1:18047 [Candidate] entering Candidate state
2016/06/23 07:45:31 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:31 [INFO] agent: shutdown complete
--- PASS: TestAgent_UpdateCheck (1.38s)
=== RUN   TestAgent_ConsulService
2016/06/23 07:45:31 [INFO] raft: Node at 127.0.0.1:18048 [Follower] entering Follower state
2016/06/23 07:45:31 [INFO] serf: EventMemberJoin: Node 48 127.0.0.1
2016/06/23 07:45:31 [INFO] consul: adding LAN server Node 48 (Addr: 127.0.0.1:18048) (DC: dc1)
2016/06/23 07:45:31 [INFO] serf: EventMemberJoin: Node 48.dc1 127.0.0.1
2016/06/23 07:45:31 [INFO] consul: adding WAN server Node 48.dc1 (Addr: 127.0.0.1:18048) (DC: dc1)
2016/06/23 07:45:31 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:31 [INFO] raft: Node at 127.0.0.1:18048 [Candidate] entering Candidate state
2016/06/23 07:45:32 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:32 [DEBUG] raft: Vote granted from 127.0.0.1:18048. Tally: 1
2016/06/23 07:45:32 [INFO] raft: Election won. Tally: 1
2016/06/23 07:45:32 [INFO] raft: Node at 127.0.0.1:18048 [Leader] entering Leader state
2016/06/23 07:45:32 [INFO] consul: cluster leadership acquired
2016/06/23 07:45:32 [INFO] consul: New leader elected: Node 48
2016/06/23 07:45:32 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:45:32 [DEBUG] raft: Node 127.0.0.1:18048 updated peer set (2): [127.0.0.1:18048]
2016/06/23 07:45:32 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:33 [INFO] consul: member 'Node 48' joined, marking health alive
2016/06/23 07:45:33 [INFO] agent: Synced service 'consul'
2016/06/23 07:45:33 [INFO] agent: requesting shutdown
2016/06/23 07:45:33 [INFO] consul: shutting down server
2016/06/23 07:45:33 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:33 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:33 [INFO] agent: shutdown complete
--- PASS: TestAgent_ConsulService (2.56s)
=== RUN   TestAgent_PersistService
2016/06/23 07:45:33 [INFO] serf: EventMemberJoin: Node 49 127.0.0.1
2016/06/23 07:45:33 [INFO] agent: requesting shutdown
2016/06/23 07:45:33 [INFO] consul: shutting down client
2016/06/23 07:45:33 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:34 [INFO] agent: shutdown complete
2016/06/23 07:45:34 [INFO] serf: EventMemberJoin: Node 49 127.0.0.1
2016/06/23 07:45:34 [WARN] serf: Failed to re-join any previously known node
2016/06/23 07:45:34 [DEBUG] agent: restored service definition "redis" from "/tmp/agent372134534/services/86a1b907d54bf7010394bf316e183e67"
2016/06/23 07:45:34 [INFO] agent: requesting shutdown
2016/06/23 07:45:34 [INFO] consul: shutting down client
2016/06/23 07:45:34 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:34 [INFO] agent: shutdown complete
--- PASS: TestAgent_PersistService (0.19s)
=== RUN   TestAgent_persistedService_compat
2016/06/23 07:45:34 [INFO] raft: Node at 127.0.0.1:18050 [Follower] entering Follower state
2016/06/23 07:45:34 [INFO] serf: EventMemberJoin: Node 50 127.0.0.1
2016/06/23 07:45:34 [INFO] consul: adding LAN server Node 50 (Addr: 127.0.0.1:18050) (DC: dc1)
2016/06/23 07:45:34 [INFO] serf: EventMemberJoin: Node 50.dc1 127.0.0.1
2016/06/23 07:45:34 [INFO] consul: adding WAN server Node 50.dc1 (Addr: 127.0.0.1:18050) (DC: dc1)
2016/06/23 07:45:34 [DEBUG] agent: restored service definition "redis" from "/tmp/agent684281389/services/86a1b907d54bf7010394bf316e183e67"
2016/06/23 07:45:34 [INFO] agent: requesting shutdown
2016/06/23 07:45:34 [INFO] consul: shutting down server
2016/06/23 07:45:34 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:35 [INFO] raft: Node at 127.0.0.1:18050 [Candidate] entering Candidate state
2016/06/23 07:45:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:35 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:35 [INFO] agent: shutdown complete
--- PASS: TestAgent_persistedService_compat (1.54s)
=== RUN   TestAgent_PurgeService
2016/06/23 07:45:36 [INFO] raft: Node at 127.0.0.1:18051 [Follower] entering Follower state
2016/06/23 07:45:36 [INFO] serf: EventMemberJoin: Node 51 127.0.0.1
2016/06/23 07:45:36 [INFO] consul: adding LAN server Node 51 (Addr: 127.0.0.1:18051) (DC: dc1)
2016/06/23 07:45:36 [INFO] serf: EventMemberJoin: Node 51.dc1 127.0.0.1
2016/06/23 07:45:36 [INFO] consul: adding WAN server Node 51.dc1 (Addr: 127.0.0.1:18051) (DC: dc1)
2016/06/23 07:45:36 [DEBUG] agent: removed service "redis"
2016/06/23 07:45:36 [DEBUG] agent: removed service "redis"
2016/06/23 07:45:36 [INFO] agent: requesting shutdown
2016/06/23 07:45:36 [INFO] consul: shutting down server
2016/06/23 07:45:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:36 [INFO] raft: Node at 127.0.0.1:18051 [Candidate] entering Candidate state
2016/06/23 07:45:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:36 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:36 [INFO] agent: shutdown complete
--- PASS: TestAgent_PurgeService (1.08s)
=== RUN   TestAgent_PurgeServiceOnDuplicate
2016/06/23 07:45:36 [INFO] serf: EventMemberJoin: Node 52 127.0.0.1
2016/06/23 07:45:36 [INFO] agent: requesting shutdown
2016/06/23 07:45:36 [INFO] consul: shutting down client
2016/06/23 07:45:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:36 [INFO] agent: shutdown complete
2016/06/23 07:45:36 [INFO] serf: EventMemberJoin: Node 52 127.0.0.1
2016/06/23 07:45:36 [WARN] serf: Failed to re-join any previously known node
2016/06/23 07:45:36 [DEBUG] agent: service "redis" exists, not restoring from "/tmp/agent971988199/services/86a1b907d54bf7010394bf316e183e67"
2016/06/23 07:45:36 [INFO] agent: requesting shutdown
2016/06/23 07:45:36 [INFO] consul: shutting down client
2016/06/23 07:45:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:36 [INFO] agent: shutdown complete
--- PASS: TestAgent_PurgeServiceOnDuplicate (0.19s)
=== RUN   TestAgent_PersistCheck
2016/06/23 07:45:36 [INFO] serf: EventMemberJoin: Node 53 127.0.0.1
2016/06/23 07:45:36 [DEBUG] agent: pausing 4.996996313s before first invocation of /bin/true
2016/06/23 07:45:36 [DEBUG] agent: pausing 5.419466061s before first invocation of /bin/true
2016/06/23 07:45:36 [DEBUG] agent: pausing 760.707974ms before first invocation of /bin/true
2016/06/23 07:45:36 [INFO] agent: requesting shutdown
2016/06/23 07:45:36 [INFO] consul: shutting down client
2016/06/23 07:45:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:37 [INFO] agent: shutdown complete
2016/06/23 07:45:37 [INFO] serf: EventMemberJoin: Node 53 127.0.0.1
2016/06/23 07:45:37 [WARN] serf: Failed to re-join any previously known node
2016/06/23 07:45:37 [DEBUG] agent: restored health check "mem" from "/tmp/agent044344602/checks/afc4fc7e48a0710a1dc94ef3e8bc5764"
2016/06/23 07:45:37 [INFO] agent: requesting shutdown
2016/06/23 07:45:37 [INFO] consul: shutting down client
2016/06/23 07:45:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:37 [DEBUG] agent: pausing 4.067279579s before first invocation of /bin/true
2016/06/23 07:45:37 [INFO] agent: shutdown complete
--- PASS: TestAgent_PersistCheck (0.21s)
=== RUN   TestAgent_PurgeCheck
2016/06/23 07:45:37 [INFO] raft: Node at 127.0.0.1:18054 [Follower] entering Follower state
2016/06/23 07:45:37 [INFO] serf: EventMemberJoin: Node 54 127.0.0.1
2016/06/23 07:45:37 [INFO] consul: adding LAN server Node 54 (Addr: 127.0.0.1:18054) (DC: dc1)
2016/06/23 07:45:37 [INFO] serf: EventMemberJoin: Node 54.dc1 127.0.0.1
2016/06/23 07:45:37 [INFO] consul: adding WAN server Node 54.dc1 (Addr: 127.0.0.1:18054) (DC: dc1)
2016/06/23 07:45:37 [DEBUG] agent: removed check "mem"
2016/06/23 07:45:37 [DEBUG] agent: removed check "mem"
2016/06/23 07:45:37 [INFO] agent: requesting shutdown
2016/06/23 07:45:37 [INFO] consul: shutting down server
2016/06/23 07:45:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:37 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:37 [INFO] raft: Node at 127.0.0.1:18054 [Candidate] entering Candidate state
2016/06/23 07:45:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:38 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:38 [INFO] agent: shutdown complete
--- PASS: TestAgent_PurgeCheck (1.05s)
=== RUN   TestAgent_PurgeCheckOnDuplicate
2016/06/23 07:45:38 [INFO] serf: EventMemberJoin: Node 55 127.0.0.1
2016/06/23 07:45:38 [INFO] agent: requesting shutdown
2016/06/23 07:45:38 [INFO] consul: shutting down client
2016/06/23 07:45:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:38 [INFO] agent: shutdown complete
2016/06/23 07:45:38 [INFO] serf: EventMemberJoin: Node 55 127.0.0.1
2016/06/23 07:45:38 [WARN] serf: Failed to re-join any previously known node
2016/06/23 07:45:38 [DEBUG] agent: pausing 148.991425ms before first invocation of /bin/check-redis.py
2016/06/23 07:45:38 [DEBUG] agent: check "mem" exists, not restoring from "/tmp/agent170527580/checks/afc4fc7e48a0710a1dc94ef3e8bc5764"
2016/06/23 07:45:38 [INFO] agent: requesting shutdown
2016/06/23 07:45:38 [INFO] consul: shutting down client
2016/06/23 07:45:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:38 [INFO] agent: shutdown complete
--- PASS: TestAgent_PurgeCheckOnDuplicate (0.15s)
=== RUN   TestAgent_loadChecks_token
2016/06/23 07:45:39 [INFO] raft: Node at 127.0.0.1:18056 [Follower] entering Follower state
2016/06/23 07:45:39 [INFO] serf: EventMemberJoin: Node 56 127.0.0.1
2016/06/23 07:45:39 [INFO] consul: adding LAN server Node 56 (Addr: 127.0.0.1:18056) (DC: dc1)
2016/06/23 07:45:39 [INFO] serf: EventMemberJoin: Node 56.dc1 127.0.0.1
2016/06/23 07:45:39 [INFO] consul: adding WAN server Node 56.dc1 (Addr: 127.0.0.1:18056) (DC: dc1)
2016/06/23 07:45:39 [INFO] agent: requesting shutdown
2016/06/23 07:45:39 [INFO] consul: shutting down server
2016/06/23 07:45:39 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:39 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:39 [INFO] raft: Node at 127.0.0.1:18056 [Candidate] entering Candidate state
2016/06/23 07:45:39 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:39 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:39 [INFO] agent: shutdown complete
--- PASS: TestAgent_loadChecks_token (1.44s)
=== RUN   TestAgent_unloadChecks
2016/06/23 07:45:40 [INFO] raft: Node at 127.0.0.1:18057 [Follower] entering Follower state
2016/06/23 07:45:40 [INFO] serf: EventMemberJoin: Node 57 127.0.0.1
2016/06/23 07:45:40 [INFO] consul: adding LAN server Node 57 (Addr: 127.0.0.1:18057) (DC: dc1)
2016/06/23 07:45:40 [INFO] serf: EventMemberJoin: Node 57.dc1 127.0.0.1
2016/06/23 07:45:40 [DEBUG] agent: removed check "service:redis"
2016/06/23 07:45:40 [INFO] agent: requesting shutdown
2016/06/23 07:45:40 [INFO] consul: adding WAN server Node 57.dc1 (Addr: 127.0.0.1:18057) (DC: dc1)
2016/06/23 07:45:40 [INFO] consul: shutting down server
2016/06/23 07:45:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:40 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:40 [INFO] raft: Node at 127.0.0.1:18057 [Candidate] entering Candidate state
2016/06/23 07:45:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:41 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:41 [INFO] agent: shutdown complete
--- PASS: TestAgent_unloadChecks (1.37s)
=== RUN   TestAgent_loadServices_token
2016/06/23 07:45:41 [INFO] raft: Node at 127.0.0.1:18058 [Follower] entering Follower state
2016/06/23 07:45:41 [INFO] serf: EventMemberJoin: Node 58 127.0.0.1
2016/06/23 07:45:41 [INFO] consul: adding LAN server Node 58 (Addr: 127.0.0.1:18058) (DC: dc1)
2016/06/23 07:45:41 [INFO] serf: EventMemberJoin: Node 58.dc1 127.0.0.1
2016/06/23 07:45:41 [INFO] agent: requesting shutdown
2016/06/23 07:45:41 [INFO] consul: shutting down server
2016/06/23 07:45:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:41 [INFO] consul: adding WAN server Node 58.dc1 (Addr: 127.0.0.1:18058) (DC: dc1)
2016/06/23 07:45:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:41 [INFO] raft: Node at 127.0.0.1:18058 [Candidate] entering Candidate state
2016/06/23 07:45:42 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:42 [INFO] agent: shutdown complete
--- PASS: TestAgent_loadServices_token (0.96s)
=== RUN   TestAgent_unloadServices
2016/06/23 07:45:42 [INFO] raft: Node at 127.0.0.1:18059 [Follower] entering Follower state
2016/06/23 07:45:42 [INFO] serf: EventMemberJoin: Node 59 127.0.0.1
2016/06/23 07:45:42 [INFO] consul: adding LAN server Node 59 (Addr: 127.0.0.1:18059) (DC: dc1)
2016/06/23 07:45:42 [INFO] serf: EventMemberJoin: Node 59.dc1 127.0.0.1
2016/06/23 07:45:42 [DEBUG] agent: removed service "redis"
2016/06/23 07:45:42 [INFO] agent: requesting shutdown
2016/06/23 07:45:42 [INFO] consul: shutting down server
2016/06/23 07:45:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:42 [INFO] consul: adding WAN server Node 59.dc1 (Addr: 127.0.0.1:18059) (DC: dc1)
2016/06/23 07:45:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:42 [INFO] raft: Node at 127.0.0.1:18059 [Candidate] entering Candidate state
2016/06/23 07:45:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:43 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:43 [INFO] agent: shutdown complete
--- PASS: TestAgent_unloadServices (0.84s)
=== RUN   TestAgent_ServiceMaintenanceMode
2016/06/23 07:45:43 [INFO] raft: Node at 127.0.0.1:18060 [Follower] entering Follower state
2016/06/23 07:45:43 [INFO] serf: EventMemberJoin: Node 60 127.0.0.1
2016/06/23 07:45:43 [INFO] consul: adding LAN server Node 60 (Addr: 127.0.0.1:18060) (DC: dc1)
2016/06/23 07:45:43 [INFO] serf: EventMemberJoin: Node 60.dc1 127.0.0.1
2016/06/23 07:45:43 [INFO] consul: adding WAN server Node 60.dc1 (Addr: 127.0.0.1:18060) (DC: dc1)
2016/06/23 07:45:43 [INFO] agent: Service "redis" entered maintenance mode
2016/06/23 07:45:43 [DEBUG] agent: removed check "_service_maintenance:redis"
2016/06/23 07:45:43 [INFO] agent: Service "redis" left maintenance mode
2016/06/23 07:45:43 [INFO] agent: Service "redis" entered maintenance mode
2016/06/23 07:45:43 [INFO] agent: requesting shutdown
2016/06/23 07:45:43 [INFO] consul: shutting down server
2016/06/23 07:45:43 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:43 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:43 [INFO] raft: Node at 127.0.0.1:18060 [Candidate] entering Candidate state
2016/06/23 07:45:43 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:43 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:43 [INFO] agent: shutdown complete
--- PASS: TestAgent_ServiceMaintenanceMode (0.96s)
=== RUN   TestAgent_addCheck_restoresSnapshot
2016/06/23 07:45:44 [INFO] raft: Node at 127.0.0.1:18061 [Follower] entering Follower state
2016/06/23 07:45:44 [INFO] serf: EventMemberJoin: Node 61 127.0.0.1
2016/06/23 07:45:44 [INFO] consul: adding LAN server Node 61 (Addr: 127.0.0.1:18061) (DC: dc1)
2016/06/23 07:45:44 [INFO] serf: EventMemberJoin: Node 61.dc1 127.0.0.1
2016/06/23 07:45:44 [INFO] agent: requesting shutdown
2016/06/23 07:45:44 [INFO] consul: shutting down server
2016/06/23 07:45:44 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:44 [INFO] consul: adding WAN server Node 61.dc1 (Addr: 127.0.0.1:18061) (DC: dc1)
2016/06/23 07:45:44 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:44 [INFO] raft: Node at 127.0.0.1:18061 [Candidate] entering Candidate state
2016/06/23 07:45:44 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:44 [INFO] agent: shutdown complete
--- PASS: TestAgent_addCheck_restoresSnapshot (0.78s)
=== RUN   TestAgent_NodeMaintenanceMode
2016/06/23 07:45:45 [INFO] raft: Node at 127.0.0.1:18062 [Follower] entering Follower state
2016/06/23 07:45:45 [INFO] serf: EventMemberJoin: Node 62 127.0.0.1
2016/06/23 07:45:45 [INFO] consul: adding LAN server Node 62 (Addr: 127.0.0.1:18062) (DC: dc1)
2016/06/23 07:45:45 [INFO] serf: EventMemberJoin: Node 62.dc1 127.0.0.1
2016/06/23 07:45:45 [INFO] consul: adding WAN server Node 62.dc1 (Addr: 127.0.0.1:18062) (DC: dc1)
2016/06/23 07:45:45 [INFO] agent: Node entered maintenance mode
2016/06/23 07:45:45 [DEBUG] agent: removed check "_node_maintenance"
2016/06/23 07:45:45 [INFO] agent: Node left maintenance mode
2016/06/23 07:45:45 [INFO] agent: Node entered maintenance mode
2016/06/23 07:45:45 [INFO] agent: requesting shutdown
2016/06/23 07:45:45 [INFO] consul: shutting down server
2016/06/23 07:45:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:45 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:45 [INFO] raft: Node at 127.0.0.1:18062 [Candidate] entering Candidate state
2016/06/23 07:45:45 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:45 [INFO] agent: shutdown complete
--- PASS: TestAgent_NodeMaintenanceMode (0.84s)
=== RUN   TestAgent_checkStateSnapshot
2016/06/23 07:45:45 [INFO] raft: Node at 127.0.0.1:18063 [Follower] entering Follower state
2016/06/23 07:45:45 [INFO] serf: EventMemberJoin: Node 63 127.0.0.1
2016/06/23 07:45:45 [INFO] consul: adding LAN server Node 63 (Addr: 127.0.0.1:18063) (DC: dc1)
2016/06/23 07:45:45 [INFO] serf: EventMemberJoin: Node 63.dc1 127.0.0.1
2016/06/23 07:45:45 [INFO] consul: adding WAN server Node 63.dc1 (Addr: 127.0.0.1:18063) (DC: dc1)
2016/06/23 07:45:45 [DEBUG] agent: removed check "service:redis"
2016/06/23 07:45:45 [DEBUG] agent: restored health check "service:redis" from "/tmp/agent129381508/checks/60a2ef12de014a05ecdc850d9aab46da"
2016/06/23 07:45:45 [INFO] agent: requesting shutdown
2016/06/23 07:45:45 [INFO] consul: shutting down server
2016/06/23 07:45:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:45 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:45 [INFO] raft: Node at 127.0.0.1:18063 [Candidate] entering Candidate state
2016/06/23 07:45:46 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:46 [INFO] agent: shutdown complete
--- PASS: TestAgent_checkStateSnapshot (0.92s)
=== RUN   TestAgent_loadChecks_checkFails
2016/06/23 07:45:46 [INFO] raft: Node at 127.0.0.1:18064 [Follower] entering Follower state
2016/06/23 07:45:46 [INFO] serf: EventMemberJoin: Node 64 127.0.0.1
2016/06/23 07:45:46 [INFO] consul: adding LAN server Node 64 (Addr: 127.0.0.1:18064) (DC: dc1)
2016/06/23 07:45:46 [INFO] serf: EventMemberJoin: Node 64.dc1 127.0.0.1
2016/06/23 07:45:46 [INFO] consul: adding WAN server Node 64.dc1 (Addr: 127.0.0.1:18064) (DC: dc1)
2016/06/23 07:45:46 [WARN] agent: Failed to restore check "service:redis": ServiceID "nope" does not exist
2016/06/23 07:45:46 [DEBUG] agent: restored health check "service:redis" from "/tmp/agent352852499/checks/60a2ef12de014a05ecdc850d9aab46da"
2016/06/23 07:45:46 [INFO] agent: requesting shutdown
2016/06/23 07:45:46 [INFO] consul: shutting down server
2016/06/23 07:45:46 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:46 [INFO] raft: Node at 127.0.0.1:18064 [Candidate] entering Candidate state
2016/06/23 07:45:46 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:47 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:47 [INFO] agent: shutdown complete
--- PASS: TestAgent_loadChecks_checkFails (0.93s)
=== RUN   TestAgent_persistCheckState
2016/06/23 07:45:47 [INFO] raft: Node at 127.0.0.1:18065 [Follower] entering Follower state
2016/06/23 07:45:47 [INFO] serf: EventMemberJoin: Node 65 127.0.0.1
2016/06/23 07:45:47 [INFO] consul: adding LAN server Node 65 (Addr: 127.0.0.1:18065) (DC: dc1)
2016/06/23 07:45:47 [INFO] serf: EventMemberJoin: Node 65.dc1 127.0.0.1
2016/06/23 07:45:47 [INFO] consul: adding WAN server Node 65.dc1 (Addr: 127.0.0.1:18065) (DC: dc1)
2016/06/23 07:45:47 [INFO] agent: requesting shutdown
2016/06/23 07:45:47 [INFO] consul: shutting down server
2016/06/23 07:45:47 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:47 [INFO] raft: Node at 127.0.0.1:18065 [Candidate] entering Candidate state
2016/06/23 07:45:47 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:48 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:48 [INFO] agent: shutdown complete
--- PASS: TestAgent_persistCheckState (0.93s)
=== RUN   TestAgent_loadCheckState
2016/06/23 07:45:48 [INFO] raft: Node at 127.0.0.1:18066 [Follower] entering Follower state
2016/06/23 07:45:48 [INFO] serf: EventMemberJoin: Node 66 127.0.0.1
2016/06/23 07:45:48 [INFO] serf: EventMemberJoin: Node 66.dc1 127.0.0.1
2016/06/23 07:45:48 [INFO] consul: adding WAN server Node 66.dc1 (Addr: 127.0.0.1:18066) (DC: dc1)
2016/06/23 07:45:48 [DEBUG] agent: check state expired for "check1", not restoring
2016/06/23 07:45:48 [INFO] consul: adding LAN server Node 66 (Addr: 127.0.0.1:18066) (DC: dc1)
2016/06/23 07:45:48 [INFO] agent: requesting shutdown
2016/06/23 07:45:48 [INFO] consul: shutting down server
2016/06/23 07:45:48 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:48 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:48 [INFO] raft: Node at 127.0.0.1:18066 [Candidate] entering Candidate state
2016/06/23 07:45:48 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:49 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:49 [INFO] agent: shutdown complete
--- PASS: TestAgent_loadCheckState (0.84s)
=== RUN   TestAgent_purgeCheckState
2016/06/23 07:45:49 [INFO] raft: Node at 127.0.0.1:18067 [Follower] entering Follower state
2016/06/23 07:45:49 [INFO] serf: EventMemberJoin: Node 67 127.0.0.1
2016/06/23 07:45:49 [INFO] consul: adding LAN server Node 67 (Addr: 127.0.0.1:18067) (DC: dc1)
2016/06/23 07:45:49 [INFO] serf: EventMemberJoin: Node 67.dc1 127.0.0.1
2016/06/23 07:45:49 [INFO] consul: adding WAN server Node 67.dc1 (Addr: 127.0.0.1:18067) (DC: dc1)
2016/06/23 07:45:49 [INFO] agent: requesting shutdown
2016/06/23 07:45:49 [INFO] consul: shutting down server
2016/06/23 07:45:49 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:49 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:49 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:49 [INFO] raft: Node at 127.0.0.1:18067 [Candidate] entering Candidate state
2016/06/23 07:45:50 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:50 [INFO] agent: shutdown complete
--- PASS: TestAgent_purgeCheckState (0.88s)
=== RUN   TestAgent_GetCoordinate
2016/06/23 07:45:50 [INFO] raft: Node at 127.0.0.1:18068 [Follower] entering Follower state
2016/06/23 07:45:50 [INFO] serf: EventMemberJoin: Node 68 127.0.0.1
2016/06/23 07:45:50 [INFO] consul: adding LAN server Node 68 (Addr: 127.0.0.1:18068) (DC: dc1)
2016/06/23 07:45:50 [INFO] serf: EventMemberJoin: Node 68.dc1 127.0.0.1
2016/06/23 07:45:50 [INFO] agent: requesting shutdown
2016/06/23 07:45:50 [INFO] consul: shutting down server
2016/06/23 07:45:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:50 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:50 [INFO] raft: Node at 127.0.0.1:18068 [Candidate] entering Candidate state
2016/06/23 07:45:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:50 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:50 [INFO] agent: shutdown complete
2016/06/23 07:45:50 [INFO] serf: EventMemberJoin: Node 69 127.0.0.1
2016/06/23 07:45:50 [INFO] agent: requesting shutdown
2016/06/23 07:45:50 [INFO] consul: shutting down client
2016/06/23 07:45:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:50 [INFO] agent: shutdown complete
--- PASS: TestAgent_GetCoordinate (0.87s)
=== RUN   TestCatalogRegister
2016/06/23 07:45:51 [INFO] raft: Node at 127.0.0.1:18070 [Follower] entering Follower state
2016/06/23 07:45:51 [INFO] serf: EventMemberJoin: Node 70 127.0.0.1
2016/06/23 07:45:51 [INFO] consul: adding LAN server Node 70 (Addr: 127.0.0.1:18070) (DC: dc1)
2016/06/23 07:45:51 [INFO] serf: EventMemberJoin: Node 70.dc1 127.0.0.1
2016/06/23 07:45:51 [INFO] consul: adding WAN server Node 70.dc1 (Addr: 127.0.0.1:18070) (DC: dc1)
2016/06/23 07:45:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:51 [INFO] raft: Node at 127.0.0.1:18070 [Candidate] entering Candidate state
2016/06/23 07:45:51 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:51 [DEBUG] raft: Vote granted from 127.0.0.1:18070. Tally: 1
2016/06/23 07:45:51 [INFO] raft: Election won. Tally: 1
2016/06/23 07:45:51 [INFO] raft: Node at 127.0.0.1:18070 [Leader] entering Leader state
2016/06/23 07:45:51 [INFO] consul: cluster leadership acquired
2016/06/23 07:45:51 [INFO] consul: New leader elected: Node 70
2016/06/23 07:45:52 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:45:52 [DEBUG] raft: Node 127.0.0.1:18070 updated peer set (2): [127.0.0.1:18070]
2016/06/23 07:45:52 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:52 [INFO] consul: member 'Node 70' joined, marking health alive
2016/06/23 07:45:53 [INFO] agent: Synced service 'foo'
2016/06/23 07:45:53 [INFO] agent: requesting shutdown
2016/06/23 07:45:53 [INFO] consul: shutting down server
2016/06/23 07:45:53 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:54 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:54 [INFO] agent: shutdown complete
2016/06/23 07:45:54 [DEBUG] http: Shutting down http server (127.0.0.1:18870)
--- PASS: TestCatalogRegister (3.21s)
=== RUN   TestCatalogDeregister
2016/06/23 07:45:54 [INFO] raft: Node at 127.0.0.1:18071 [Follower] entering Follower state
2016/06/23 07:45:54 [INFO] serf: EventMemberJoin: Node 71 127.0.0.1
2016/06/23 07:45:54 [INFO] consul: adding LAN server Node 71 (Addr: 127.0.0.1:18071) (DC: dc1)
2016/06/23 07:45:54 [INFO] serf: EventMemberJoin: Node 71.dc1 127.0.0.1
2016/06/23 07:45:54 [INFO] consul: adding WAN server Node 71.dc1 (Addr: 127.0.0.1:18071) (DC: dc1)
2016/06/23 07:45:54 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:54 [INFO] raft: Node at 127.0.0.1:18071 [Candidate] entering Candidate state
2016/06/23 07:45:55 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:55 [DEBUG] raft: Vote granted from 127.0.0.1:18071. Tally: 1
2016/06/23 07:45:55 [INFO] raft: Election won. Tally: 1
2016/06/23 07:45:55 [INFO] raft: Node at 127.0.0.1:18071 [Leader] entering Leader state
2016/06/23 07:45:55 [INFO] consul: cluster leadership acquired
2016/06/23 07:45:55 [INFO] consul: New leader elected: Node 71
2016/06/23 07:45:55 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:45:55 [DEBUG] raft: Node 127.0.0.1:18071 updated peer set (2): [127.0.0.1:18071]
2016/06/23 07:45:55 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:56 [INFO] consul: member 'Node 71' joined, marking health alive
2016/06/23 07:45:56 [INFO] agent: requesting shutdown
2016/06/23 07:45:56 [INFO] consul: shutting down server
2016/06/23 07:45:56 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:57 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:57 [INFO] agent: shutdown complete
2016/06/23 07:45:57 [DEBUG] http: Shutting down http server (127.0.0.1:18871)
--- PASS: TestCatalogDeregister (3.14s)
=== RUN   TestCatalogDatacenters
2016/06/23 07:45:58 [INFO] raft: Node at 127.0.0.1:18072 [Follower] entering Follower state
2016/06/23 07:45:58 [INFO] serf: EventMemberJoin: Node 72 127.0.0.1
2016/06/23 07:45:58 [INFO] consul: adding LAN server Node 72 (Addr: 127.0.0.1:18072) (DC: dc1)
2016/06/23 07:45:58 [INFO] serf: EventMemberJoin: Node 72.dc1 127.0.0.1
2016/06/23 07:45:58 [INFO] consul: adding WAN server Node 72.dc1 (Addr: 127.0.0.1:18072) (DC: dc1)
2016/06/23 07:45:58 [INFO] agent: requesting shutdown
2016/06/23 07:45:58 [INFO] consul: shutting down server
2016/06/23 07:45:58 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:58 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:58 [INFO] raft: Node at 127.0.0.1:18072 [Candidate] entering Candidate state
2016/06/23 07:45:58 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:59 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:59 [INFO] agent: shutdown complete
2016/06/23 07:45:59 [DEBUG] http: Shutting down http server (127.0.0.1:18872)
--- PASS: TestCatalogDatacenters (1.93s)
=== RUN   TestCatalogNodes
2016/06/23 07:45:59 [INFO] raft: Node at 127.0.0.1:18073 [Follower] entering Follower state
2016/06/23 07:45:59 [INFO] serf: EventMemberJoin: Node 73 127.0.0.1
2016/06/23 07:45:59 [INFO] consul: adding LAN server Node 73 (Addr: 127.0.0.1:18073) (DC: dc1)
2016/06/23 07:45:59 [INFO] serf: EventMemberJoin: Node 73.dc1 127.0.0.1
2016/06/23 07:45:59 [INFO] consul: adding WAN server Node 73.dc1 (Addr: 127.0.0.1:18073) (DC: dc1)
2016/06/23 07:46:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:00 [INFO] raft: Node at 127.0.0.1:18073 [Candidate] entering Candidate state
2016/06/23 07:46:00 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:00 [DEBUG] raft: Vote granted from 127.0.0.1:18073. Tally: 1
2016/06/23 07:46:00 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:00 [INFO] raft: Node at 127.0.0.1:18073 [Leader] entering Leader state
2016/06/23 07:46:00 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:00 [INFO] consul: New leader elected: Node 73
2016/06/23 07:46:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:00 [DEBUG] raft: Node 127.0.0.1:18073 updated peer set (2): [127.0.0.1:18073]
2016/06/23 07:46:00 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:01 [INFO] consul: member 'Node 73' joined, marking health alive
2016/06/23 07:46:02 [INFO] agent: requesting shutdown
2016/06/23 07:46:02 [INFO] consul: shutting down server
2016/06/23 07:46:02 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:02 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:02 [INFO] agent: shutdown complete
2016/06/23 07:46:02 [DEBUG] http: Shutting down http server (127.0.0.1:18873)
--- PASS: TestCatalogNodes (3.16s)
=== RUN   TestCatalogNodes_Blocking
2016/06/23 07:46:03 [INFO] raft: Node at 127.0.0.1:18074 [Follower] entering Follower state
2016/06/23 07:46:03 [INFO] serf: EventMemberJoin: Node 74 127.0.0.1
2016/06/23 07:46:03 [INFO] consul: adding LAN server Node 74 (Addr: 127.0.0.1:18074) (DC: dc1)
2016/06/23 07:46:03 [INFO] serf: EventMemberJoin: Node 74.dc1 127.0.0.1
2016/06/23 07:46:03 [INFO] consul: adding WAN server Node 74.dc1 (Addr: 127.0.0.1:18074) (DC: dc1)
2016/06/23 07:46:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:03 [INFO] raft: Node at 127.0.0.1:18074 [Candidate] entering Candidate state
2016/06/23 07:46:04 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:04 [DEBUG] raft: Vote granted from 127.0.0.1:18074. Tally: 1
2016/06/23 07:46:04 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:04 [INFO] raft: Node at 127.0.0.1:18074 [Leader] entering Leader state
2016/06/23 07:46:04 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:04 [INFO] consul: New leader elected: Node 74
2016/06/23 07:46:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:04 [DEBUG] raft: Node 127.0.0.1:18074 updated peer set (2): [127.0.0.1:18074]
2016/06/23 07:46:04 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:04 [INFO] consul: member 'Node 74' joined, marking health alive
2016/06/23 07:46:05 [INFO] agent: requesting shutdown
2016/06/23 07:46:05 [INFO] consul: shutting down server
2016/06/23 07:46:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:05 [INFO] agent: shutdown complete
2016/06/23 07:46:05 [DEBUG] http: Shutting down http server (127.0.0.1:18874)
--- PASS: TestCatalogNodes_Blocking (3.57s)
=== RUN   TestCatalogNodes_DistanceSort
2016/06/23 07:46:06 [INFO] raft: Node at 127.0.0.1:18075 [Follower] entering Follower state
2016/06/23 07:46:06 [INFO] serf: EventMemberJoin: Node 75 127.0.0.1
2016/06/23 07:46:06 [INFO] consul: adding LAN server Node 75 (Addr: 127.0.0.1:18075) (DC: dc1)
2016/06/23 07:46:06 [INFO] serf: EventMemberJoin: Node 75.dc1 127.0.0.1
2016/06/23 07:46:06 [INFO] consul: adding WAN server Node 75.dc1 (Addr: 127.0.0.1:18075) (DC: dc1)
2016/06/23 07:46:06 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:06 [INFO] raft: Node at 127.0.0.1:18075 [Candidate] entering Candidate state
2016/06/23 07:46:07 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:07 [DEBUG] raft: Vote granted from 127.0.0.1:18075. Tally: 1
2016/06/23 07:46:07 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:07 [INFO] raft: Node at 127.0.0.1:18075 [Leader] entering Leader state
2016/06/23 07:46:07 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:07 [INFO] consul: New leader elected: Node 75
2016/06/23 07:46:08 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:08 [DEBUG] raft: Node 127.0.0.1:18075 updated peer set (2): [127.0.0.1:18075]
2016/06/23 07:46:08 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:08 [INFO] consul: member 'Node 75' joined, marking health alive
2016/06/23 07:46:09 [INFO] agent: requesting shutdown
2016/06/23 07:46:09 [INFO] consul: shutting down server
2016/06/23 07:46:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:09 [WARN] consul.coordinate: Batch update failed: leadership lost while committing log
2016/06/23 07:46:09 [INFO] agent: shutdown complete
2016/06/23 07:46:09 [DEBUG] http: Shutting down http server (127.0.0.1:18875)
--- FAIL: TestCatalogNodes_DistanceSort (3.90s)
	catalog_endpoint_test.go:294: bad: [0x110876e0 0x11087590 0x11040a50]
=== RUN   TestCatalogServices
2016/06/23 07:46:10 [INFO] raft: Node at 127.0.0.1:18076 [Follower] entering Follower state
2016/06/23 07:46:10 [INFO] serf: EventMemberJoin: Node 76 127.0.0.1
2016/06/23 07:46:10 [INFO] consul: adding LAN server Node 76 (Addr: 127.0.0.1:18076) (DC: dc1)
2016/06/23 07:46:10 [INFO] serf: EventMemberJoin: Node 76.dc1 127.0.0.1
2016/06/23 07:46:10 [INFO] consul: adding WAN server Node 76.dc1 (Addr: 127.0.0.1:18076) (DC: dc1)
2016/06/23 07:46:10 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:10 [INFO] raft: Node at 127.0.0.1:18076 [Candidate] entering Candidate state
2016/06/23 07:46:11 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:11 [DEBUG] raft: Vote granted from 127.0.0.1:18076. Tally: 1
2016/06/23 07:46:11 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:11 [INFO] raft: Node at 127.0.0.1:18076 [Leader] entering Leader state
2016/06/23 07:46:11 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:11 [INFO] consul: New leader elected: Node 76
2016/06/23 07:46:11 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:11 [DEBUG] raft: Node 127.0.0.1:18076 updated peer set (2): [127.0.0.1:18076]
2016/06/23 07:46:11 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:11 [INFO] consul: member 'Node 76' joined, marking health alive
2016/06/23 07:46:12 [INFO] agent: requesting shutdown
2016/06/23 07:46:12 [INFO] consul: shutting down server
2016/06/23 07:46:12 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:12 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:12 [INFO] agent: shutdown complete
2016/06/23 07:46:12 [DEBUG] http: Shutting down http server (127.0.0.1:18876)
--- PASS: TestCatalogServices (2.56s)
=== RUN   TestCatalogServiceNodes
2016/06/23 07:46:13 [INFO] raft: Node at 127.0.0.1:18077 [Follower] entering Follower state
2016/06/23 07:46:13 [INFO] serf: EventMemberJoin: Node 77 127.0.0.1
2016/06/23 07:46:13 [INFO] consul: adding LAN server Node 77 (Addr: 127.0.0.1:18077) (DC: dc1)
2016/06/23 07:46:13 [INFO] serf: EventMemberJoin: Node 77.dc1 127.0.0.1
2016/06/23 07:46:13 [INFO] consul: adding WAN server Node 77.dc1 (Addr: 127.0.0.1:18077) (DC: dc1)
2016/06/23 07:46:13 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:13 [INFO] raft: Node at 127.0.0.1:18077 [Candidate] entering Candidate state
2016/06/23 07:46:13 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:13 [DEBUG] raft: Vote granted from 127.0.0.1:18077. Tally: 1
2016/06/23 07:46:13 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:13 [INFO] raft: Node at 127.0.0.1:18077 [Leader] entering Leader state
2016/06/23 07:46:13 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:13 [INFO] consul: New leader elected: Node 77
2016/06/23 07:46:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:13 [DEBUG] raft: Node 127.0.0.1:18077 updated peer set (2): [127.0.0.1:18077]
2016/06/23 07:46:14 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:14 [INFO] consul: member 'Node 77' joined, marking health alive
2016/06/23 07:46:14 [INFO] agent: requesting shutdown
2016/06/23 07:46:14 [INFO] consul: shutting down server
2016/06/23 07:46:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:15 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:15 [INFO] agent: shutdown complete
2016/06/23 07:46:15 [DEBUG] http: Shutting down http server (127.0.0.1:18877)
--- PASS: TestCatalogServiceNodes (2.66s)
=== RUN   TestCatalogServiceNodes_DistanceSort
2016/06/23 07:46:15 [INFO] raft: Node at 127.0.0.1:18078 [Follower] entering Follower state
2016/06/23 07:46:15 [INFO] serf: EventMemberJoin: Node 78 127.0.0.1
2016/06/23 07:46:15 [INFO] consul: adding LAN server Node 78 (Addr: 127.0.0.1:18078) (DC: dc1)
2016/06/23 07:46:15 [INFO] serf: EventMemberJoin: Node 78.dc1 127.0.0.1
2016/06/23 07:46:15 [INFO] consul: adding WAN server Node 78.dc1 (Addr: 127.0.0.1:18078) (DC: dc1)
2016/06/23 07:46:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:15 [INFO] raft: Node at 127.0.0.1:18078 [Candidate] entering Candidate state
2016/06/23 07:46:16 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:16 [DEBUG] raft: Vote granted from 127.0.0.1:18078. Tally: 1
2016/06/23 07:46:16 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:16 [INFO] raft: Node at 127.0.0.1:18078 [Leader] entering Leader state
2016/06/23 07:46:16 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:16 [INFO] consul: New leader elected: Node 78
2016/06/23 07:46:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:16 [DEBUG] raft: Node 127.0.0.1:18078 updated peer set (2): [127.0.0.1:18078]
2016/06/23 07:46:16 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:17 [INFO] consul: member 'Node 78' joined, marking health alive
2016/06/23 07:46:18 [INFO] agent: requesting shutdown
2016/06/23 07:46:18 [INFO] consul: shutting down server
2016/06/23 07:46:18 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:18 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:18 [INFO] agent: shutdown complete
2016/06/23 07:46:18 [DEBUG] http: Shutting down http server (127.0.0.1:18878)
--- FAIL: TestCatalogServiceNodes_DistanceSort (3.44s)
	catalog_endpoint_test.go:505: bad: [0x11180dc0 0x11180e10]
=== RUN   TestCatalogNodeServices
2016/06/23 07:46:19 [INFO] raft: Node at 127.0.0.1:18079 [Follower] entering Follower state
2016/06/23 07:46:19 [INFO] serf: EventMemberJoin: Node 79 127.0.0.1
2016/06/23 07:46:19 [INFO] consul: adding LAN server Node 79 (Addr: 127.0.0.1:18079) (DC: dc1)
2016/06/23 07:46:19 [INFO] serf: EventMemberJoin: Node 79.dc1 127.0.0.1
2016/06/23 07:46:19 [INFO] consul: adding WAN server Node 79.dc1 (Addr: 127.0.0.1:18079) (DC: dc1)
2016/06/23 07:46:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:19 [INFO] raft: Node at 127.0.0.1:18079 [Candidate] entering Candidate state
2016/06/23 07:46:19 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:19 [DEBUG] raft: Vote granted from 127.0.0.1:18079. Tally: 1
2016/06/23 07:46:19 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:19 [INFO] raft: Node at 127.0.0.1:18079 [Leader] entering Leader state
2016/06/23 07:46:19 [INFO] consul: New leader elected: Node 79
2016/06/23 07:46:19 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:20 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:20 [DEBUG] raft: Node 127.0.0.1:18079 updated peer set (2): [127.0.0.1:18079]
2016/06/23 07:46:20 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:20 [INFO] consul: member 'Node 79' joined, marking health alive
2016/06/23 07:46:21 [INFO] agent: requesting shutdown
2016/06/23 07:46:21 [INFO] consul: shutting down server
2016/06/23 07:46:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:21 [INFO] agent: shutdown complete
2016/06/23 07:46:21 [DEBUG] http: Shutting down http server (127.0.0.1:18879)
--- PASS: TestCatalogNodeServices (3.19s)
=== RUN   TestCheckMonitor_Passing
2016/06/23 07:46:21 [DEBUG] agent: pausing 1.593651ms before first invocation of exit 0
2016/06/23 07:46:21 [DEBUG] agent: check 'foo' script 'exit 0' output: 
2016/06/23 07:46:21 [DEBUG] agent: Check 'foo' is passing
2016/06/23 07:46:21 [DEBUG] agent: check 'foo' script 'exit 0' output: 
2016/06/23 07:46:21 [DEBUG] agent: Check 'foo' is passing
--- PASS: TestCheckMonitor_Passing (0.03s)
=== RUN   TestCheckMonitor_Warning
2016/06/23 07:46:21 [DEBUG] agent: pausing 2.510802ms before first invocation of exit 1
2016/06/23 07:46:21 [DEBUG] agent: check 'foo' script 'exit 1' output: 
2016/06/23 07:46:21 [WARN] agent: Check 'foo' is now warning
2016/06/23 07:46:21 [DEBUG] agent: check 'foo' script 'exit 1' output: 
2016/06/23 07:46:21 [WARN] agent: Check 'foo' is now warning
--- PASS: TestCheckMonitor_Warning (0.03s)
=== RUN   TestCheckMonitor_Critical
2016/06/23 07:46:21 [DEBUG] agent: pausing 7.603852ms before first invocation of exit 2
2016/06/23 07:46:21 [DEBUG] agent: check 'foo' script 'exit 2' output: 
2016/06/23 07:46:21 [WARN] agent: Check 'foo' is now critical
2016/06/23 07:46:21 [DEBUG] agent: check 'foo' script 'exit 2' output: 
2016/06/23 07:46:21 [WARN] agent: Check 'foo' is now critical
--- PASS: TestCheckMonitor_Critical (0.04s)
=== RUN   TestCheckMonitor_BadCmd
2016/06/23 07:46:21 [DEBUG] agent: pausing 5.6145ms before first invocation of foobarbaz
2016/06/23 07:46:21 [DEBUG] agent: check 'foo' script 'foobarbaz' output: /bin/sh: 1: foobarbaz: not found
2016/06/23 07:46:21 [WARN] agent: Check 'foo' is now critical
2016/06/23 07:46:21 [DEBUG] agent: check 'foo' script 'foobarbaz' output: /bin/sh: 1: foobarbaz: not found
2016/06/23 07:46:21 [WARN] agent: Check 'foo' is now critical
--- PASS: TestCheckMonitor_BadCmd (0.06s)
=== RUN   TestCheckMonitor_RandomStagger
2016/06/23 07:46:21 [DEBUG] agent: pausing 8.782304ms before first invocation of exit 0
2016/06/23 07:46:21 [DEBUG] agent: check 'foo' script 'exit 0' output: 
2016/06/23 07:46:21 [DEBUG] agent: Check 'foo' is passing
--- PASS: TestCheckMonitor_RandomStagger (0.05s)
=== RUN   TestCheckMonitor_LimitOutput
2016/06/23 07:46:21 [DEBUG] agent: pausing 14.771216ms before first invocation of od -N 81920 /dev/urandom
2016/06/23 07:46:21 [DEBUG] agent: check 'foo' script 'exit 0' output: 
2016/06/23 07:46:21 [DEBUG] agent: Check 'foo' is passing
--- PASS: TestCheckMonitor_LimitOutput (0.05s)
=== RUN   TestCheckTTL
2016/06/23 07:46:22 [DEBUG] agent: Check 'foo' status is now passing
2016/06/23 07:46:22 [WARN] agent: Check 'foo' missed TTL, is now critical
--- PASS: TestCheckTTL (0.20s)
=== RUN   TestCheckHTTPCritical
http://127.0.0.1:41157
2016/06/23 07:46:22 [DEBUG] agent: pausing 8.456766ms before first HTTP request of http://127.0.0.1:41157
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' script 'od -N 81920 /dev/urandom' output: Captured 4096 of 327688 bytes
...
2016/06/23 07:46:22 [DEBUG] agent: Check 'foo' is passing
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [DEBUG] agent: pausing 8.406454ms before first HTTP request of http://127.0.0.1:33781
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [DEBUG] agent: pausing 8.153909ms before first HTTP request of http://127.0.0.1:33903
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [DEBUG] agent: pausing 3.947372ms before first HTTP request of http://127.0.0.1:42854
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [DEBUG] agent: pausing 1.634893ms before first HTTP request of http://127.0.0.1:39828
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now critical
--- PASS: TestCheckHTTPCritical (0.25s)
=== RUN   TestCheckHTTPPassing
2016/06/23 07:46:22 [DEBUG] agent: pausing 2.769079ms before first HTTP request of http://127.0.0.1:37129
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: pausing 8.672517ms before first HTTP request of http://127.0.0.1:44048
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: pausing 646.518µs before first HTTP request of http://127.0.0.1:42975
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: pausing 6.540567ms before first HTTP request of http://127.0.0.1:32865
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
--- PASS: TestCheckHTTPPassing (0.20s)
=== RUN   TestCheckHTTPWarning
2016/06/23 07:46:22 [DEBUG] agent: pausing 8.337169ms before first HTTP request of http://127.0.0.1:36965
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now warning
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now warning
2016/06/23 07:46:22 [WARN] agent: check 'foo' is now warning
--- PASS: TestCheckHTTPWarning (0.05s)
=== RUN   TestCheckHTTPTimeout
2016/06/23 07:46:22 [WARN] agent: http request failed 'http://127.0.0.1:36965': Get http://127.0.0.1:36965: EOF
2016/06/23 07:46:22 [DEBUG] agent: pausing 9.754271ms before first HTTP request of http://127.0.0.1:44602
2016/06/23 07:46:22 [WARN] agent: http request failed 'http://127.0.0.1:44602': Get http://127.0.0.1:44602: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2016/06/23 07:46:22 [WARN] agent: http request failed 'http://127.0.0.1:44602': Get http://127.0.0.1:44602: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
2016/06/23 07:46:22 [WARN] agent: http request failed 'http://127.0.0.1:44602': Get http://127.0.0.1:44602: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
--- PASS: TestCheckHTTPTimeout (0.05s)
=== RUN   TestCheckHTTP_disablesKeepAlives
--- PASS: TestCheckHTTP_disablesKeepAlives (0.00s)
=== RUN   TestCheckTCPCritical
2016/06/23 07:46:22 [DEBUG] agent: pausing 553.269µs before first socket connection of 127.0.0.1:0
2016/06/23 07:46:22 [DEBUG] agent: pausing 486.435556ms before first HTTP request of http://foo.bar/baz
2016/06/23 07:46:22 [WARN] agent: socket connection failed '127.0.0.1:0': dial tcp 127.0.0.1:0: getsockopt: connection refused
2016/06/23 07:46:22 [WARN] agent: socket connection failed '127.0.0.1:0': dial tcp 127.0.0.1:0: getsockopt: connection refused
2016/06/23 07:46:22 [WARN] agent: socket connection failed '127.0.0.1:0': dial tcp 127.0.0.1:0: getsockopt: connection refused
2016/06/23 07:46:22 [WARN] agent: socket connection failed '127.0.0.1:0': dial tcp 127.0.0.1:0: getsockopt: connection refused
2016/06/23 07:46:22 [WARN] agent: socket connection failed '127.0.0.1:0': dial tcp 127.0.0.1:0: getsockopt: connection refused
--- PASS: TestCheckTCPCritical (0.05s)
=== RUN   TestCheckTCPPassing
2016/06/23 07:46:22 [DEBUG] agent: pausing 6.878371ms before first socket connection of 127.0.0.1:45314
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: pausing 1.901207ms before first socket connection of [::1]:39575
2016/06/23 07:46:22 [WARN] agent: socket connection failed '127.0.0.1:45314': dial tcp 127.0.0.1:45314: getsockopt: connection refused
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' is passing
--- PASS: TestCheckTCPPassing (0.10s)
=== RUN   TestDockerCheckWhenExecReturnsSuccessExitCode
2016/06/23 07:46:22 [DEBUG] agent: pausing 2.943294ms before first invocation of /bin/sh -c /health.sh in container 54432bad1fc7
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:22 [DEBUG] agent: check 'foo' script '/health.sh' output: output
--- PASS: TestDockerCheckWhenExecReturnsSuccessExitCode (0.05s)
=== RUN   TestDockerCheckWhenExecCreationFails
2016/06/23 07:46:22 [DEBUG] agent: pausing 8.753133ms before first invocation of /bin/sh -c /health.sh in container 54432bad1fc7
2016/06/23 07:46:22 [DEBUG] agent: Error while creating Exec: Exec Creation Failed
2016/06/23 07:46:22 [DEBUG] agent: Error while creating Exec: Exec Creation Failed
2016/06/23 07:46:23 [DEBUG] agent: Error while creating Exec: Exec Creation Failed
2016/06/23 07:46:23 [DEBUG] agent: Error while creating Exec: Exec Creation Failed
2016/06/23 07:46:23 [DEBUG] agent: Error while creating Exec: Exec Creation Failed
--- PASS: TestDockerCheckWhenExecCreationFails (0.05s)
=== RUN   TestDockerCheckWhenExitCodeIsNonZero
2016/06/23 07:46:23 [DEBUG] agent: pausing 8.749477ms before first invocation of /bin/sh -c /health.sh in container 54432bad1fc7
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: 
2016/06/23 07:46:23 [WARN] agent: Check 'foo' is now critical
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: 
2016/06/23 07:46:23 [WARN] agent: Check 'foo' is now critical
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: 
2016/06/23 07:46:23 [WARN] agent: Check 'foo' is now critical
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: 
2016/06/23 07:46:23 [WARN] agent: Check 'foo' is now critical
--- PASS: TestDockerCheckWhenExitCodeIsNonZero (0.05s)
=== RUN   TestDockerCheckWhenExitCodeIsone
2016/06/23 07:46:23 [DEBUG] agent: pausing 5.595195ms before first invocation of /bin/sh -c /health.sh in container 54432bad1fc7
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] Check failed with exit code: 1
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] Check failed with exit code: 1
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] Check failed with exit code: 1
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] Check failed with exit code: 1
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] Check failed with exit code: 1
--- PASS: TestDockerCheckWhenExitCodeIsone (0.05s)
=== RUN   TestDockerCheckWhenExecStartFails
2016/06/23 07:46:23 [DEBUG] agent: pausing 2.872262ms before first invocation of /bin/sh -c /health.sh in container 54432bad1fc7
2016/06/23 07:46:23 [DEBUG] Error in executing health checks: Couldn't Start Exec
2016/06/23 07:46:23 [DEBUG] Error in executing health checks: Couldn't Start Exec
2016/06/23 07:46:23 [DEBUG] Error in executing health checks: Couldn't Start Exec
2016/06/23 07:46:23 [DEBUG] Error in executing health checks: Couldn't Start Exec
2016/06/23 07:46:23 [DEBUG] Error in executing health checks: Couldn't Start Exec
--- PASS: TestDockerCheckWhenExecStartFails (0.05s)
=== RUN   TestDockerCheckWhenExecInfoFails
2016/06/23 07:46:23 [DEBUG] agent: pausing 3.300867ms before first invocation of /bin/sh -c /health.sh in container 54432bad1fc7
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: 
2016/06/23 07:46:23 [DEBUG] Error in inspecting check result : Unable to query exec info
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: 
2016/06/23 07:46:23 [DEBUG] Error in inspecting check result : Unable to query exec info
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: 
2016/06/23 07:46:23 [DEBUG] Error in inspecting check result : Unable to query exec info
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: 
2016/06/23 07:46:23 [DEBUG] Error in inspecting check result : Unable to query exec info
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: 
2016/06/23 07:46:23 [DEBUG] Error in inspecting check result : Unable to query exec info
--- PASS: TestDockerCheckWhenExecInfoFails (0.05s)
=== RUN   TestDockerCheckDefaultToSh
2016/06/23 07:46:23 [DEBUG] agent: pausing 651.979µs before first invocation of /bin/sh -c /health.sh in container 54432bad1fc7
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
--- PASS: TestDockerCheckDefaultToSh (0.05s)
=== RUN   TestDockerCheckUseShellFromEnv
2016/06/23 07:46:23 [DEBUG] agent: pausing 2.630578ms before first invocation of /bin/bash -c /health.sh in container 54432bad1fc7
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: output
--- PASS: TestDockerCheckUseShellFromEnv (0.05s)
=== RUN   TestDockerCheckTruncateOutput
2016/06/23 07:46:23 [DEBUG] agent: pausing 1.471739ms before first invocation of /bin/sh -c /health.sh in container 54432bad1fc7
--- PASS: TestDockerCheckTruncateOutput (0.05s)
=== RUN   TestCommand_implements
--- PASS: TestCommand_implements (0.00s)
=== RUN   TestValidDatacenter
--- PASS: TestValidDatacenter (0.00s)
=== RUN   TestRetryJoin
2016/06/23 07:46:23 [DEBUG] agent: check 'foo' script '/health.sh' output: Captured 4096 of 327688 bytes
...
2016/06/23 07:46:23 [INFO] raft: Node at 127.0.0.1:18080 [Follower] entering Follower state
2016/06/23 07:46:23 [INFO] serf: EventMemberJoin: Node 80 127.0.0.1
2016/06/23 07:46:23 [INFO] consul: adding LAN server Node 80 (Addr: 127.0.0.1:18080) (DC: dc1)
2016/06/23 07:46:23 [INFO] serf: EventMemberJoin: Node 80.dc1 127.0.0.1
2016/06/23 07:46:23 [INFO] consul: adding WAN server Node 80.dc1 (Addr: 127.0.0.1:18080) (DC: dc1)
2016/06/23 07:46:24 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:24 [INFO] raft: Node at 127.0.0.1:18080 [Candidate] entering Candidate state
2016/06/23 07:46:24 [DEBUG] memberlist: TCP connection from=127.0.0.1:54240
2016/06/23 07:46:24 [DEBUG] memberlist: TCP connection from=127.0.0.1:45244
2016/06/23 07:46:24 [INFO] serf: EventMemberJoin: "Node 81".dc1 127.0.0.1
2016/06/23 07:46:24 [INFO] serf: EventMemberJoin: "Node 81" 127.0.0.1
2016/06/23 07:46:24 [INFO] consul: adding LAN server "Node 81" (Addr: 127.0.0.1:8300) (DC: dc1)
2016/06/23 07:46:24 [INFO] consul: adding WAN server "Node 81".dc1 (Addr: 127.0.0.1:8300) (DC: dc1)
2016/06/23 07:46:24 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:24 [DEBUG] raft: Vote granted from 127.0.0.1:18080. Tally: 1
2016/06/23 07:46:24 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:24 [INFO] raft: Node at 127.0.0.1:18080 [Leader] entering Leader state
2016/06/23 07:46:24 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:24 [INFO] consul: New leader elected: Node 80
2016/06/23 07:46:24 [DEBUG] serf: messageJoinType: "Node 81"
2016/06/23 07:46:24 [DEBUG] serf: messageLeaveType: "Node 81".dc1
2016/06/23 07:46:24 [DEBUG] serf: messageJoinType: "Node 81".dc1
2016/06/23 07:46:24 [DEBUG] serf: messageJoinType: "Node 81"
2016/06/23 07:46:24 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:24 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:25 [DEBUG] raft: Node 127.0.0.1:18080 updated peer set (2): [127.0.0.1:18080]
2016/06/23 07:46:25 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:25 [DEBUG] serf: messageJoinType: "Node 81"
2016/06/23 07:46:25 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:25 [DEBUG] serf: messageLeaveType: "Node 81".dc1
2016/06/23 07:46:25 [DEBUG] serf: messageJoinType: "Node 81".dc1
2016/06/23 07:46:25 [DEBUG] serf: messageJoinType: "Node 81"
2016/06/23 07:46:25 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:25 [DEBUG] serf: messageLeaveType: "Node 81".dc1
2016/06/23 07:46:25 [DEBUG] serf: messageJoinType: "Node 81".dc1
2016/06/23 07:46:25 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:25 [DEBUG] serf: messageLeaveType: "Node 81".dc1
2016/06/23 07:46:25 [DEBUG] serf: messageJoinType: "Node 81".dc1
2016/06/23 07:46:25 [INFO] serf: EventMemberLeave: "Node 81".dc1 127.0.0.1
2016/06/23 07:46:25 [INFO] consul: removing WAN server "Node 81".dc1 (Addr: 127.0.0.1:8300) (DC: dc1)
2016/06/23 07:46:25 [INFO] consul: member 'Node 80' joined, marking health alive
2016/06/23 07:46:25 [DEBUG] raft: Node 127.0.0.1:18080 updated peer set (2): [127.0.0.1:8300 127.0.0.1:18080]
2016/06/23 07:46:25 [INFO] raft: Added peer 127.0.0.1:8300, starting replication
2016/06/23 07:46:26 [DEBUG] raft: Failed to contact 127.0.0.1:8300 in 327.493356ms
2016/06/23 07:46:26 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:46:26 [INFO] raft: Node at 127.0.0.1:18080 [Follower] entering Follower state
2016/06/23 07:46:26 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:46:26 [INFO] consul: cluster leadership lost
2016/06/23 07:46:26 [ERR] consul: failed to reconcile member: {"Node 81" 127.0.0.1 8301 map[dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build:: port:8300 role:consul] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:46:26 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:46:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:26 [WARN] raft: AppendEntries to 127.0.0.1:8300 rejected, sending older logs (next: 1)
2016/06/23 07:46:26 [INFO] raft: Node at 127.0.0.1:18080 [Candidate] entering Candidate state
2016/06/23 07:46:26 [INFO] raft: pipelining replication to peer 127.0.0.1:8300
2016/06/23 07:46:26 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:8300
2016/06/23 07:46:26 [DEBUG] serf: messageLeaveType: "Node 81"
2016/06/23 07:46:26 [DEBUG] serf: messageLeaveType: "Node 81"
2016/06/23 07:46:26 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:26 [DEBUG] raft: Vote granted from 127.0.0.1:18080. Tally: 1
2016/06/23 07:46:26 [DEBUG] serf: messageLeaveType: "Node 81"
2016/06/23 07:46:26 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:26 [INFO] raft: Node at 127.0.0.1:18080 [Candidate] entering Candidate state
2016/06/23 07:46:27 [DEBUG] serf: messageLeaveType: "Node 81"
2016/06/23 07:46:27 [INFO] serf: EventMemberLeave: "Node 81" 127.0.0.1
2016/06/23 07:46:27 [INFO] consul: removing LAN server "Node 81" (Addr: 127.0.0.1:8300) (DC: dc1)
2016/06/23 07:46:27 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:27 [DEBUG] raft: Vote granted from 127.0.0.1:18080. Tally: 1
2016/06/23 07:46:27 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:27 [INFO] raft: Node at 127.0.0.1:18080 [Candidate] entering Candidate state
2016/06/23 07:46:27 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:8300: EOF
2016/06/23 07:46:27 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:8300: EOF
2016/06/23 07:46:27 [INFO] agent: requesting shutdown
2016/06/23 07:46:27 [INFO] consul: shutting down server
2016/06/23 07:46:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:28 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:28 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:28 [INFO] agent: shutdown complete
--- PASS: TestRetryJoin (4.86s)
=== RUN   TestReadCliConfig
--- PASS: TestReadCliConfig (0.00s)
=== RUN   TestRetryJoinFail
--- PASS: TestRetryJoinFail (0.00s)
=== RUN   TestRetryJoinWanFail
--- PASS: TestRetryJoinWanFail (0.00s)
=== RUN   TestSetupAgent_RPCUnixSocket_FileExists
--- PASS: TestSetupAgent_RPCUnixSocket_FileExists (0.70s)
=== RUN   TestSetupScadaConn
2016/06/23 07:46:29 [INFO] raft: Node at 127.0.0.1:18085 [Follower] entering Follower state
2016/06/23 07:46:29 [INFO] serf: EventMemberJoin: Node 85 127.0.0.1
2016/06/23 07:46:29 [INFO] consul: adding LAN server Node 85 (Addr: 127.0.0.1:18085) (DC: dc1)
2016/06/23 07:46:29 [INFO] serf: EventMemberJoin: Node 85.dc1 127.0.0.1
2016/06/23 07:46:29 [INFO] consul: adding WAN server Node 85.dc1 (Addr: 127.0.0.1:18085) (DC: dc1)
2016/06/23 07:46:29 [DEBUG] http: Shutting down http server (SCADA)
2016/06/23 07:46:29 [INFO] agent: requesting shutdown
2016/06/23 07:46:29 [INFO] consul: shutting down server
2016/06/23 07:46:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:29 [INFO] raft: Node at 127.0.0.1:18085 [Candidate] entering Candidate state
2016/06/23 07:46:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:30 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:30 [INFO] agent: shutdown complete
--- PASS: TestSetupScadaConn (1.10s)
=== RUN   TestProtectDataDir
--- PASS: TestProtectDataDir (0.00s)
=== RUN   TestConfigEncryptBytes
--- PASS: TestConfigEncryptBytes (0.00s)
=== RUN   TestDecodeConfig
--- PASS: TestDecodeConfig (0.11s)
=== RUN   TestDecodeConfig_invalidKeys
--- PASS: TestDecodeConfig_invalidKeys (0.00s)
=== RUN   TestDecodeConfig_Services
--- PASS: TestDecodeConfig_Services (0.01s)
=== RUN   TestDecodeConfig_Checks
--- PASS: TestDecodeConfig_Checks (0.00s)
=== RUN   TestDecodeConfig_Multiples
--- PASS: TestDecodeConfig_Multiples (0.00s)
=== RUN   TestDecodeConfig_Service
--- PASS: TestDecodeConfig_Service (0.00s)
=== RUN   TestDecodeConfig_Check
--- PASS: TestDecodeConfig_Check (0.00s)
=== RUN   TestMergeConfig
--- PASS: TestMergeConfig (0.00s)
=== RUN   TestReadConfigPaths_badPath
--- PASS: TestReadConfigPaths_badPath (0.00s)
=== RUN   TestReadConfigPaths_file
--- PASS: TestReadConfigPaths_file (0.00s)
=== RUN   TestReadConfigPaths_dir
--- PASS: TestReadConfigPaths_dir (0.01s)
=== RUN   TestUnixSockets
--- PASS: TestUnixSockets (0.00s)
=== RUN   TestCoordinate_Datacenters
2016/06/23 07:46:30 [ERR] scada-client: failed to handshake: invalid token
2016/06/23 07:46:30 [ERR] scada-client: failed to handshake: invalid token
2016/06/23 07:46:30 [INFO] raft: Node at 127.0.0.1:18087 [Follower] entering Follower state
2016/06/23 07:46:30 [INFO] serf: EventMemberJoin: Node 87 127.0.0.1
2016/06/23 07:46:30 [INFO] consul: adding LAN server Node 87 (Addr: 127.0.0.1:18087) (DC: dc1)
2016/06/23 07:46:30 [INFO] serf: EventMemberJoin: Node 87.dc1 127.0.0.1
2016/06/23 07:46:30 [INFO] consul: adding WAN server Node 87.dc1 (Addr: 127.0.0.1:18087) (DC: dc1)
2016/06/23 07:46:30 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:30 [INFO] raft: Node at 127.0.0.1:18087 [Candidate] entering Candidate state
2016/06/23 07:46:31 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:31 [DEBUG] raft: Vote granted from 127.0.0.1:18087. Tally: 1
2016/06/23 07:46:31 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:31 [INFO] raft: Node at 127.0.0.1:18087 [Leader] entering Leader state
2016/06/23 07:46:31 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:31 [INFO] consul: New leader elected: Node 87
2016/06/23 07:46:31 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:31 [DEBUG] raft: Node 127.0.0.1:18087 updated peer set (2): [127.0.0.1:18087]
2016/06/23 07:46:31 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:32 [INFO] consul: member 'Node 87' joined, marking health alive
2016/06/23 07:46:32 [INFO] agent: requesting shutdown
2016/06/23 07:46:32 [INFO] consul: shutting down server
2016/06/23 07:46:32 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:32 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:32 [INFO] agent: shutdown complete
2016/06/23 07:46:32 [DEBUG] http: Shutting down http server (127.0.0.1:18887)
--- PASS: TestCoordinate_Datacenters (2.36s)
=== RUN   TestCoordinate_Nodes
2016/06/23 07:46:33 [INFO] raft: Node at 127.0.0.1:18088 [Follower] entering Follower state
2016/06/23 07:46:33 [INFO] serf: EventMemberJoin: Node 88 127.0.0.1
2016/06/23 07:46:33 [INFO] consul: adding LAN server Node 88 (Addr: 127.0.0.1:18088) (DC: dc1)
2016/06/23 07:46:33 [INFO] serf: EventMemberJoin: Node 88.dc1 127.0.0.1
2016/06/23 07:46:33 [INFO] consul: adding WAN server Node 88.dc1 (Addr: 127.0.0.1:18088) (DC: dc1)
2016/06/23 07:46:33 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:33 [INFO] raft: Node at 127.0.0.1:18088 [Candidate] entering Candidate state
2016/06/23 07:46:33 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:33 [DEBUG] raft: Vote granted from 127.0.0.1:18088. Tally: 1
2016/06/23 07:46:33 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:33 [INFO] raft: Node at 127.0.0.1:18088 [Leader] entering Leader state
2016/06/23 07:46:33 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:33 [INFO] consul: New leader elected: Node 88
2016/06/23 07:46:33 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:33 [DEBUG] raft: Node 127.0.0.1:18088 updated peer set (2): [127.0.0.1:18088]
2016/06/23 07:46:34 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:34 [INFO] consul: member 'Node 88' joined, marking health alive
2016/06/23 07:46:34 [INFO] agent: requesting shutdown
2016/06/23 07:46:34 [INFO] consul: shutting down server
2016/06/23 07:46:34 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:35 [WARN] consul.coordinate: Batch update failed: leadership lost while committing log
2016/06/23 07:46:35 [INFO] agent: shutdown complete
2016/06/23 07:46:35 [DEBUG] http: Shutting down http server (127.0.0.1:18888)
--- FAIL: TestCoordinate_Nodes (2.62s)
	coordinate_endpoint_test.go:120: bad: []
=== RUN   TestRecursorAddr
--- PASS: TestRecursorAddr (0.00s)
=== RUN   TestDNS_NodeLookup
2016/06/23 07:46:36 [INFO] raft: Node at 127.0.0.1:18089 [Follower] entering Follower state
2016/06/23 07:46:36 [INFO] serf: EventMemberJoin: Node 89 127.0.0.1
2016/06/23 07:46:36 [INFO] consul: adding LAN server Node 89 (Addr: 127.0.0.1:18089) (DC: dc1)
2016/06/23 07:46:36 [INFO] serf: EventMemberJoin: Node 89.dc1 127.0.0.1
2016/06/23 07:46:36 [INFO] consul: adding WAN server Node 89.dc1 (Addr: 127.0.0.1:18089) (DC: dc1)
2016/06/23 07:46:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:36 [INFO] raft: Node at 127.0.0.1:18089 [Candidate] entering Candidate state
2016/06/23 07:46:37 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:37 [DEBUG] raft: Vote granted from 127.0.0.1:18089. Tally: 1
2016/06/23 07:46:37 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:37 [INFO] raft: Node at 127.0.0.1:18089 [Leader] entering Leader state
2016/06/23 07:46:37 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:37 [INFO] consul: New leader elected: Node 89
2016/06/23 07:46:37 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:37 [DEBUG] raft: Node 127.0.0.1:18089 updated peer set (2): [127.0.0.1:18089]
2016/06/23 07:46:37 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:38 [INFO] consul: member 'Node 89' joined, marking health alive
2016/06/23 07:46:38 [DEBUG] dns: request for {foo.node.consul. 255 1} (1.084033ms) from client 127.0.0.1:48341 (udp)
2016/06/23 07:46:38 [DEBUG] dns: request for {foo.node.dc1.consul. 255 1} (501.682µs) from client 127.0.0.1:44317 (udp)
2016/06/23 07:46:38 [INFO] agent: requesting shutdown
2016/06/23 07:46:38 [INFO] consul: shutting down server
2016/06/23 07:46:38 [DEBUG] dns: request for {nofoo.node.dc1.consul. 255 1} (405.679µs) from client 127.0.0.1:47527 (udp)
2016/06/23 07:46:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:38 [INFO] agent: shutdown complete
--- PASS: TestDNS_NodeLookup (3.47s)
=== RUN   TestDNS_CaseInsensitiveNodeLookup
2016/06/23 07:46:39 [INFO] raft: Node at 127.0.0.1:18090 [Follower] entering Follower state
2016/06/23 07:46:39 [INFO] serf: EventMemberJoin: Node 90 127.0.0.1
2016/06/23 07:46:39 [INFO] consul: adding LAN server Node 90 (Addr: 127.0.0.1:18090) (DC: dc1)
2016/06/23 07:46:39 [INFO] serf: EventMemberJoin: Node 90.dc1 127.0.0.1
2016/06/23 07:46:39 [INFO] consul: adding WAN server Node 90.dc1 (Addr: 127.0.0.1:18090) (DC: dc1)
2016/06/23 07:46:39 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:39 [INFO] raft: Node at 127.0.0.1:18090 [Candidate] entering Candidate state
2016/06/23 07:46:40 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:40 [DEBUG] raft: Vote granted from 127.0.0.1:18090. Tally: 1
2016/06/23 07:46:40 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:40 [INFO] raft: Node at 127.0.0.1:18090 [Leader] entering Leader state
2016/06/23 07:46:40 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:40 [INFO] consul: New leader elected: Node 90
2016/06/23 07:46:40 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:40 [DEBUG] raft: Node 127.0.0.1:18090 updated peer set (2): [127.0.0.1:18090]
2016/06/23 07:46:40 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:41 [INFO] consul: member 'Node 90' joined, marking health alive
2016/06/23 07:46:41 [DEBUG] dns: request for {fOO.node.dc1.consul. 255 1} (897.695µs) from client 127.0.0.1:39624 (udp)
2016/06/23 07:46:41 [INFO] agent: requesting shutdown
2016/06/23 07:46:41 [INFO] consul: shutting down server
2016/06/23 07:46:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:41 [INFO] agent: shutdown complete
--- PASS: TestDNS_CaseInsensitiveNodeLookup (2.98s)
=== RUN   TestDNS_NodeLookup_PeriodName
2016/06/23 07:46:42 [INFO] raft: Node at 127.0.0.1:18091 [Follower] entering Follower state
2016/06/23 07:46:42 [INFO] serf: EventMemberJoin: Node 91 127.0.0.1
2016/06/23 07:46:42 [INFO] consul: adding LAN server Node 91 (Addr: 127.0.0.1:18091) (DC: dc1)
2016/06/23 07:46:42 [INFO] serf: EventMemberJoin: Node 91.dc1 127.0.0.1
2016/06/23 07:46:42 [INFO] consul: adding WAN server Node 91.dc1 (Addr: 127.0.0.1:18091) (DC: dc1)
2016/06/23 07:46:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:42 [INFO] raft: Node at 127.0.0.1:18091 [Candidate] entering Candidate state
2016/06/23 07:46:42 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:42 [DEBUG] raft: Vote granted from 127.0.0.1:18091. Tally: 1
2016/06/23 07:46:42 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:42 [INFO] raft: Node at 127.0.0.1:18091 [Leader] entering Leader state
2016/06/23 07:46:42 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:42 [INFO] consul: New leader elected: Node 91
2016/06/23 07:46:42 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:43 [DEBUG] raft: Node 127.0.0.1:18091 updated peer set (2): [127.0.0.1:18091]
2016/06/23 07:46:43 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:43 [INFO] consul: member 'Node 91' joined, marking health alive
2016/06/23 07:46:43 [DEBUG] dns: request for {foo.bar.node.consul. 255 1} (925.695µs) from client 127.0.0.1:38750 (udp)
2016/06/23 07:46:43 [INFO] agent: requesting shutdown
2016/06/23 07:46:43 [INFO] consul: shutting down server
2016/06/23 07:46:43 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:43 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:43 [INFO] agent: shutdown complete
--- PASS: TestDNS_NodeLookup_PeriodName (2.32s)
=== RUN   TestDNS_NodeLookup_AAAA
2016/06/23 07:46:44 [INFO] raft: Node at 127.0.0.1:18092 [Follower] entering Follower state
2016/06/23 07:46:44 [INFO] serf: EventMemberJoin: Node 92 127.0.0.1
2016/06/23 07:46:44 [INFO] serf: EventMemberJoin: Node 92.dc1 127.0.0.1
2016/06/23 07:46:44 [INFO] consul: adding LAN server Node 92 (Addr: 127.0.0.1:18092) (DC: dc1)
2016/06/23 07:46:44 [INFO] consul: adding WAN server Node 92.dc1 (Addr: 127.0.0.1:18092) (DC: dc1)
2016/06/23 07:46:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:44 [INFO] raft: Node at 127.0.0.1:18092 [Candidate] entering Candidate state
2016/06/23 07:46:45 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:45 [DEBUG] raft: Vote granted from 127.0.0.1:18092. Tally: 1
2016/06/23 07:46:45 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:45 [INFO] raft: Node at 127.0.0.1:18092 [Leader] entering Leader state
2016/06/23 07:46:45 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:45 [INFO] consul: New leader elected: Node 92
2016/06/23 07:46:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:45 [DEBUG] raft: Node 127.0.0.1:18092 updated peer set (2): [127.0.0.1:18092]
2016/06/23 07:46:45 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:45 [INFO] consul: member 'Node 92' joined, marking health alive
2016/06/23 07:46:46 [DEBUG] dns: request for {bar.node.consul. 28 1} (893.694µs) from client 127.0.0.1:48355 (udp)
2016/06/23 07:46:46 [INFO] agent: requesting shutdown
2016/06/23 07:46:46 [INFO] consul: shutting down server
2016/06/23 07:46:46 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:46 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:46 [INFO] agent: shutdown complete
--- PASS: TestDNS_NodeLookup_AAAA (2.61s)
=== RUN   TestDNS_NodeLookup_CNAME
2016/06/23 07:46:47 [INFO] raft: Node at 127.0.0.1:18094 [Follower] entering Follower state
2016/06/23 07:46:47 [INFO] serf: EventMemberJoin: Node 94 127.0.0.1
2016/06/23 07:46:47 [INFO] consul: adding LAN server Node 94 (Addr: 127.0.0.1:18094) (DC: dc1)
2016/06/23 07:46:47 [INFO] serf: EventMemberJoin: Node 94.dc1 127.0.0.1
2016/06/23 07:46:47 [INFO] consul: adding WAN server Node 94.dc1 (Addr: 127.0.0.1:18094) (DC: dc1)
2016/06/23 07:46:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:47 [INFO] raft: Node at 127.0.0.1:18094 [Candidate] entering Candidate state
2016/06/23 07:46:47 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:47 [DEBUG] raft: Vote granted from 127.0.0.1:18094. Tally: 1
2016/06/23 07:46:47 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:47 [INFO] raft: Node at 127.0.0.1:18094 [Leader] entering Leader state
2016/06/23 07:46:47 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:47 [INFO] consul: New leader elected: Node 94
2016/06/23 07:46:48 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:48 [DEBUG] raft: Node 127.0.0.1:18094 updated peer set (2): [127.0.0.1:18094]
2016/06/23 07:46:48 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:48 [INFO] consul: member 'Node 94' joined, marking health alive
2016/06/23 07:46:49 [DEBUG] dns: cname recurse RTT for www.google.com. (522.683µs)
2016/06/23 07:46:49 [DEBUG] dns: request for {google.node.consul. 255 1} (2.024062ms) from client 127.0.0.1:47160 (udp)
2016/06/23 07:46:49 [INFO] agent: requesting shutdown
2016/06/23 07:46:49 [INFO] consul: shutting down server
2016/06/23 07:46:49 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:49 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:49 [INFO] agent: shutdown complete
--- PASS: TestDNS_NodeLookup_CNAME (2.94s)
=== RUN   TestDNS_ReverseLookup
2016/06/23 07:46:50 [INFO] raft: Node at 127.0.0.1:18095 [Follower] entering Follower state
2016/06/23 07:46:50 [INFO] serf: EventMemberJoin: Node 95 127.0.0.1
2016/06/23 07:46:50 [INFO] consul: adding LAN server Node 95 (Addr: 127.0.0.1:18095) (DC: dc1)
2016/06/23 07:46:50 [INFO] serf: EventMemberJoin: Node 95.dc1 127.0.0.1
2016/06/23 07:46:50 [INFO] consul: adding WAN server Node 95.dc1 (Addr: 127.0.0.1:18095) (DC: dc1)
2016/06/23 07:46:50 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:50 [INFO] raft: Node at 127.0.0.1:18095 [Candidate] entering Candidate state
2016/06/23 07:46:50 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:50 [DEBUG] raft: Vote granted from 127.0.0.1:18095. Tally: 1
2016/06/23 07:46:50 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:50 [INFO] raft: Node at 127.0.0.1:18095 [Leader] entering Leader state
2016/06/23 07:46:50 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:50 [INFO] consul: New leader elected: Node 95
2016/06/23 07:46:50 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:51 [DEBUG] raft: Node 127.0.0.1:18095 updated peer set (2): [127.0.0.1:18095]
2016/06/23 07:46:51 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:51 [INFO] consul: member 'Node 95' joined, marking health alive
2016/06/23 07:46:51 [INFO] agent: requesting shutdown
2016/06/23 07:46:51 [INFO] consul: shutting down server
2016/06/23 07:46:51 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:51 [DEBUG] dns: request for {2.0.0.127.in-addr.arpa. 255 1} (412.346µs) from client 127.0.0.1:58905 (udp)
2016/06/23 07:46:52 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:52 [INFO] agent: shutdown complete
--- PASS: TestDNS_ReverseLookup (2.57s)
=== RUN   TestDNS_ReverseLookup_CustomDomain
2016/06/23 07:46:52 [INFO] raft: Node at 127.0.0.1:18096 [Follower] entering Follower state
2016/06/23 07:46:52 [INFO] serf: EventMemberJoin: Node 96 127.0.0.1
2016/06/23 07:46:52 [INFO] consul: adding LAN server Node 96 (Addr: 127.0.0.1:18096) (DC: dc1)
2016/06/23 07:46:52 [INFO] serf: EventMemberJoin: Node 96.dc1 127.0.0.1
2016/06/23 07:46:52 [INFO] consul: adding WAN server Node 96.dc1 (Addr: 127.0.0.1:18096) (DC: dc1)
2016/06/23 07:46:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:52 [INFO] raft: Node at 127.0.0.1:18096 [Candidate] entering Candidate state
2016/06/23 07:46:53 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:53 [DEBUG] raft: Vote granted from 127.0.0.1:18096. Tally: 1
2016/06/23 07:46:53 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:53 [INFO] raft: Node at 127.0.0.1:18096 [Leader] entering Leader state
2016/06/23 07:46:53 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:53 [INFO] consul: New leader elected: Node 96
2016/06/23 07:46:53 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:53 [DEBUG] raft: Node 127.0.0.1:18096 updated peer set (2): [127.0.0.1:18096]
2016/06/23 07:46:53 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:53 [INFO] consul: member 'Node 96' joined, marking health alive
2016/06/23 07:46:54 [INFO] agent: requesting shutdown
2016/06/23 07:46:54 [INFO] consul: shutting down server
2016/06/23 07:46:54 [DEBUG] dns: request for {2.0.0.127.in-addr.arpa. 255 1} (412.68µs) from client 127.0.0.1:50120 (udp)
2016/06/23 07:46:54 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:54 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:54 [INFO] agent: shutdown complete
--- PASS: TestDNS_ReverseLookup_CustomDomain (2.38s)
=== RUN   TestDNS_ReverseLookup_IPV6
2016/06/23 07:46:55 [INFO] raft: Node at 127.0.0.1:18097 [Follower] entering Follower state
2016/06/23 07:46:55 [INFO] serf: EventMemberJoin: Node 97 127.0.0.1
2016/06/23 07:46:55 [INFO] consul: adding LAN server Node 97 (Addr: 127.0.0.1:18097) (DC: dc1)
2016/06/23 07:46:55 [INFO] serf: EventMemberJoin: Node 97.dc1 127.0.0.1
2016/06/23 07:46:55 [INFO] consul: adding WAN server Node 97.dc1 (Addr: 127.0.0.1:18097) (DC: dc1)
2016/06/23 07:46:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:55 [INFO] raft: Node at 127.0.0.1:18097 [Candidate] entering Candidate state
2016/06/23 07:46:55 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:55 [DEBUG] raft: Vote granted from 127.0.0.1:18097. Tally: 1
2016/06/23 07:46:55 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:55 [INFO] raft: Node at 127.0.0.1:18097 [Leader] entering Leader state
2016/06/23 07:46:55 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:55 [INFO] consul: New leader elected: Node 97
2016/06/23 07:46:55 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:55 [DEBUG] raft: Node 127.0.0.1:18097 updated peer set (2): [127.0.0.1:18097]
2016/06/23 07:46:55 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:56 [INFO] consul: member 'Node 97' joined, marking health alive
2016/06/23 07:46:56 [DEBUG] dns: request for {2.4.2.4.2.4.2.4.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa. 255 1} (488.349µs) from client 127.0.0.1:45010 (udp)
2016/06/23 07:46:56 [INFO] agent: requesting shutdown
2016/06/23 07:46:56 [INFO] consul: shutting down server
2016/06/23 07:46:56 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:56 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:56 [INFO] agent: shutdown complete
--- PASS: TestDNS_ReverseLookup_IPV6 (2.29s)
=== RUN   TestDNS_ServiceLookup
2016/06/23 07:46:57 [INFO] raft: Node at 127.0.0.1:18098 [Follower] entering Follower state
2016/06/23 07:46:57 [INFO] serf: EventMemberJoin: Node 98 127.0.0.1
2016/06/23 07:46:57 [INFO] serf: EventMemberJoin: Node 98.dc1 127.0.0.1
2016/06/23 07:46:57 [INFO] consul: adding LAN server Node 98 (Addr: 127.0.0.1:18098) (DC: dc1)
2016/06/23 07:46:57 [INFO] consul: adding WAN server Node 98.dc1 (Addr: 127.0.0.1:18098) (DC: dc1)
2016/06/23 07:46:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:57 [INFO] raft: Node at 127.0.0.1:18098 [Candidate] entering Candidate state
2016/06/23 07:46:58 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:58 [DEBUG] raft: Vote granted from 127.0.0.1:18098. Tally: 1
2016/06/23 07:46:58 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:58 [INFO] raft: Node at 127.0.0.1:18098 [Leader] entering Leader state
2016/06/23 07:46:58 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:58 [INFO] consul: New leader elected: Node 98
2016/06/23 07:46:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:58 [DEBUG] raft: Node 127.0.0.1:18098 updated peer set (2): [127.0.0.1:18098]
2016/06/23 07:46:58 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:58 [INFO] consul: member 'Node 98' joined, marking health alive
2016/06/23 07:46:59 [DEBUG] dns: request for {db.service.consul. 33 1} (873.36µs) from client 127.0.0.1:49650 (udp)
2016/06/23 07:46:59 [DEBUG] dns: request for {6bb43be6-8666-dada-e62d-99cd55b5950a.query.consul. 33 1} (659.02µs) from client 127.0.0.1:33540 (udp)
2016/06/23 07:46:59 [DEBUG] dns: request for {nodb.service.consul. 33 1} (612.352µs) from client 127.0.0.1:53860 (udp)
2016/06/23 07:46:59 [INFO] agent: requesting shutdown
2016/06/23 07:46:59 [INFO] consul: shutting down server
2016/06/23 07:46:59 [DEBUG] dns: request for {nope.query.consul. 33 1} (367.678µs) from client 127.0.0.1:33957 (udp)
2016/06/23 07:46:59 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:59 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:59 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup (3.22s)
=== RUN   TestDNS_ServiceLookup_ServiceAddress
2016/06/23 07:47:00 [INFO] raft: Node at 127.0.0.1:18099 [Follower] entering Follower state
2016/06/23 07:47:00 [INFO] serf: EventMemberJoin: Node 99 127.0.0.1
2016/06/23 07:47:00 [INFO] consul: adding LAN server Node 99 (Addr: 127.0.0.1:18099) (DC: dc1)
2016/06/23 07:47:00 [INFO] serf: EventMemberJoin: Node 99.dc1 127.0.0.1
2016/06/23 07:47:00 [INFO] consul: adding WAN server Node 99.dc1 (Addr: 127.0.0.1:18099) (DC: dc1)
2016/06/23 07:47:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:00 [INFO] raft: Node at 127.0.0.1:18099 [Candidate] entering Candidate state
2016/06/23 07:47:01 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:01 [DEBUG] raft: Vote granted from 127.0.0.1:18099. Tally: 1
2016/06/23 07:47:01 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:01 [INFO] raft: Node at 127.0.0.1:18099 [Leader] entering Leader state
2016/06/23 07:47:01 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:01 [INFO] consul: New leader elected: Node 99
2016/06/23 07:47:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:01 [DEBUG] raft: Node 127.0.0.1:18099 updated peer set (2): [127.0.0.1:18099]
2016/06/23 07:47:01 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:02 [INFO] consul: member 'Node 99' joined, marking health alive
2016/06/23 07:47:03 [DEBUG] dns: request for {db.service.consul. 33 1} (848.692µs) from client 127.0.0.1:37178 (udp)
2016/06/23 07:47:03 [DEBUG] dns: request for {44dfa9ab-024f-4129-2dff-998b8cceccd5.query.consul. 33 1} (687.354µs) from client 127.0.0.1:33476 (udp)
2016/06/23 07:47:03 [INFO] agent: requesting shutdown
2016/06/23 07:47:03 [INFO] consul: shutting down server
2016/06/23 07:47:03 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:03 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:03 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_ServiceAddress (3.33s)
=== RUN   TestDNS_ServiceLookup_WanAddress
2016/06/23 07:47:03 [INFO] raft: Node at 127.0.0.1:18100 [Follower] entering Follower state
2016/06/23 07:47:03 [INFO] serf: EventMemberJoin: Node 100 127.0.0.1
2016/06/23 07:47:03 [INFO] consul: adding LAN server Node 100 (Addr: 127.0.0.1:18100) (DC: dc1)
2016/06/23 07:47:03 [INFO] serf: EventMemberJoin: Node 100.dc1 127.0.0.1
2016/06/23 07:47:03 [INFO] consul: adding WAN server Node 100.dc1 (Addr: 127.0.0.1:18100) (DC: dc1)
2016/06/23 07:47:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:03 [INFO] raft: Node at 127.0.0.1:18100 [Candidate] entering Candidate state
2016/06/23 07:47:04 [INFO] raft: Node at 127.0.0.1:18101 [Follower] entering Follower state
2016/06/23 07:47:04 [INFO] serf: EventMemberJoin: Node 101 127.0.0.1
2016/06/23 07:47:04 [INFO] consul: adding LAN server Node 101 (Addr: 127.0.0.1:18101) (DC: dc2)
2016/06/23 07:47:04 [INFO] serf: EventMemberJoin: Node 101.dc2 127.0.0.1
2016/06/23 07:47:04 [INFO] consul: adding WAN server Node 101.dc2 (Addr: 127.0.0.1:18101) (DC: dc2)
2016/06/23 07:47:04 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:04 [DEBUG] raft: Vote granted from 127.0.0.1:18100. Tally: 1
2016/06/23 07:47:04 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:04 [INFO] raft: Node at 127.0.0.1:18100 [Leader] entering Leader state
2016/06/23 07:47:04 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:04 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:04 [INFO] raft: Node at 127.0.0.1:18101 [Candidate] entering Candidate state
2016/06/23 07:47:04 [INFO] consul: New leader elected: Node 100
2016/06/23 07:47:05 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:05 [DEBUG] raft: Node 127.0.0.1:18100 updated peer set (2): [127.0.0.1:18100]
2016/06/23 07:47:05 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:05 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:05 [DEBUG] raft: Vote granted from 127.0.0.1:18101. Tally: 1
2016/06/23 07:47:05 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:05 [INFO] raft: Node at 127.0.0.1:18101 [Leader] entering Leader state
2016/06/23 07:47:05 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:05 [INFO] consul: New leader elected: Node 101
2016/06/23 07:47:05 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:05 [DEBUG] raft: Node 127.0.0.1:18101 updated peer set (2): [127.0.0.1:18101]
2016/06/23 07:47:05 [INFO] consul: member 'Node 100' joined, marking health alive
2016/06/23 07:47:06 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:06 [INFO] consul: member 'Node 101' joined, marking health alive
2016/06/23 07:47:06 [INFO] agent: (WAN) joining: [127.0.0.1:18500]
2016/06/23 07:47:06 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18500
2016/06/23 07:47:06 [DEBUG] memberlist: TCP connection from=127.0.0.1:49044
2016/06/23 07:47:06 [INFO] serf: EventMemberJoin: Node 101.dc2 127.0.0.1
2016/06/23 07:47:06 [INFO] consul: adding WAN server Node 101.dc2 (Addr: 127.0.0.1:18101) (DC: dc2)
2016/06/23 07:47:06 [INFO] serf: EventMemberJoin: Node 100.dc1 127.0.0.1
2016/06/23 07:47:06 [INFO] agent: (WAN) joined: 1 Err: <nil>
2016/06/23 07:47:06 [INFO] consul: adding WAN server Node 100.dc1 (Addr: 127.0.0.1:18100) (DC: dc1)
2016/06/23 07:47:06 [DEBUG] serf: messageJoinType: Node 101.dc2
2016/06/23 07:47:06 [DEBUG] serf: messageJoinType: Node 101.dc2
2016/06/23 07:47:06 [DEBUG] serf: messageJoinType: Node 101.dc2
2016/06/23 07:47:06 [DEBUG] serf: messageJoinType: Node 101.dc2
2016/06/23 07:47:06 [DEBUG] serf: messageJoinType: Node 101.dc2
2016/06/23 07:47:06 [DEBUG] serf: messageJoinType: Node 101.dc2
2016/06/23 07:47:06 [DEBUG] serf: messageJoinType: Node 101.dc2
2016/06/23 07:47:06 [DEBUG] serf: messageJoinType: Node 101.dc2
2016/06/23 07:47:07 [DEBUG] dns: request for {db.service.dc2.consul. 33 1} (5.128824ms) from client 127.0.0.1:33099 (udp)
2016/06/23 07:47:07 [DEBUG] dns: request for {6757783c-4f13-fc35-3a9c-f070ad318c41.query.dc2.consul. 33 1} (2.458408ms) from client 127.0.0.1:41385 (udp)
2016/06/23 07:47:07 [DEBUG] dns: request for {db.service.dc2.consul. 1 1} (3.432772ms) from client 127.0.0.1:38585 (udp)
2016/06/23 07:47:07 [DEBUG] dns: request for {6757783c-4f13-fc35-3a9c-f070ad318c41.query.dc2.consul. 1 1} (2.057396ms) from client 127.0.0.1:37903 (udp)
2016/06/23 07:47:07 [DEBUG] dns: request for {db.service.dc2.consul. 33 1} (769.024µs) from client 127.0.0.1:56046 (udp)
2016/06/23 07:47:07 [DEBUG] dns: request for {6757783c-4f13-fc35-3a9c-f070ad318c41.query.dc2.consul. 33 1} (614.019µs) from client 127.0.0.1:44469 (udp)
2016/06/23 07:47:07 [DEBUG] dns: request for {db.service.dc2.consul. 1 1} (689.021µs) from client 127.0.0.1:48374 (udp)
2016/06/23 07:47:07 [DEBUG] dns: request for {6757783c-4f13-fc35-3a9c-f070ad318c41.query.dc2.consul. 1 1} (518.349µs) from client 127.0.0.1:35352 (udp)
2016/06/23 07:47:07 [ERR] dns: error starting tcp server: accept tcp 127.0.0.1:19101: use of closed network connection
2016/06/23 07:47:07 [ERR] dns: error starting tcp server: accept tcp 127.0.0.1:19100: use of closed network connection
--- PASS: TestDNS_ServiceLookup_WanAddress (3.88s)
=== RUN   TestDNS_CaseInsensitiveServiceLookup
2016/06/23 07:47:07 [DEBUG] memberlist: Potential blocking operation. Last command took 15.2448ms
2016/06/23 07:47:07 [INFO] raft: Node at 127.0.0.1:18102 [Follower] entering Follower state
2016/06/23 07:47:07 [INFO] serf: EventMemberJoin: Node 102 127.0.0.1
2016/06/23 07:47:07 [INFO] consul: adding LAN server Node 102 (Addr: 127.0.0.1:18102) (DC: dc1)
2016/06/23 07:47:07 [INFO] serf: EventMemberJoin: Node 102.dc1 127.0.0.1
2016/06/23 07:47:07 [INFO] consul: adding WAN server Node 102.dc1 (Addr: 127.0.0.1:18102) (DC: dc1)
2016/06/23 07:47:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:07 [INFO] raft: Node at 127.0.0.1:18102 [Candidate] entering Candidate state
2016/06/23 07:47:08 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:08 [DEBUG] raft: Vote granted from 127.0.0.1:18102. Tally: 1
2016/06/23 07:47:08 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:08 [INFO] raft: Node at 127.0.0.1:18102 [Leader] entering Leader state
2016/06/23 07:47:08 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:08 [INFO] consul: New leader elected: Node 102
2016/06/23 07:47:09 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:09 [DEBUG] raft: Node 127.0.0.1:18102 updated peer set (2): [127.0.0.1:18102]
2016/06/23 07:47:09 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:09 [INFO] consul: member 'Node 102' joined, marking health alive
2016/06/23 07:47:10 [DEBUG] dns: request for {master.db.service.consul. 33 1} (1.012365ms) from client 127.0.0.1:36740 (udp)
2016/06/23 07:47:10 [DEBUG] dns: request for {mASTER.dB.service.consul. 33 1} (898.028µs) from client 127.0.0.1:54525 (udp)
2016/06/23 07:47:10 [DEBUG] dns: request for {MASTER.dB.service.consul. 33 1} (891.027µs) from client 127.0.0.1:51015 (udp)
2016/06/23 07:47:10 [DEBUG] dns: request for {db.service.consul. 33 1} (812.358µs) from client 127.0.0.1:41654 (udp)
2016/06/23 07:47:10 [DEBUG] dns: request for {DB.service.consul. 33 1} (788.691µs) from client 127.0.0.1:51172 (udp)
2016/06/23 07:47:10 [DEBUG] dns: request for {Db.service.consul. 33 1} (791.69µs) from client 127.0.0.1:60070 (udp)
2016/06/23 07:47:10 [DEBUG] dns: request for {somequery.query.consul. 33 1} (549.35µs) from client 127.0.0.1:46279 (udp)
2016/06/23 07:47:10 [DEBUG] dns: request for {SomeQuery.query.consul. 33 1} (549.35µs) from client 127.0.0.1:56736 (udp)
2016/06/23 07:47:10 [INFO] agent: requesting shutdown
2016/06/23 07:47:10 [INFO] consul: shutting down server
2016/06/23 07:47:10 [DEBUG] dns: request for {SOMEQUERY.query.consul. 33 1} (583.684µs) from client 127.0.0.1:33482 (udp)
2016/06/23 07:47:10 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:10 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:10 [INFO] agent: shutdown complete
--- PASS: TestDNS_CaseInsensitiveServiceLookup (3.66s)
=== RUN   TestDNS_ServiceLookup_TagPeriod
2016/06/23 07:47:11 [INFO] raft: Node at 127.0.0.1:18103 [Follower] entering Follower state
2016/06/23 07:47:11 [INFO] serf: EventMemberJoin: Node 103 127.0.0.1
2016/06/23 07:47:11 [INFO] consul: adding LAN server Node 103 (Addr: 127.0.0.1:18103) (DC: dc1)
2016/06/23 07:47:11 [INFO] serf: EventMemberJoin: Node 103.dc1 127.0.0.1
2016/06/23 07:47:11 [INFO] consul: adding WAN server Node 103.dc1 (Addr: 127.0.0.1:18103) (DC: dc1)
2016/06/23 07:47:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:11 [INFO] raft: Node at 127.0.0.1:18103 [Candidate] entering Candidate state
2016/06/23 07:47:12 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:12 [DEBUG] raft: Vote granted from 127.0.0.1:18103. Tally: 1
2016/06/23 07:47:12 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:12 [INFO] raft: Node at 127.0.0.1:18103 [Leader] entering Leader state
2016/06/23 07:47:12 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:12 [INFO] consul: New leader elected: Node 103
2016/06/23 07:47:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:12 [DEBUG] raft: Node 127.0.0.1:18103 updated peer set (2): [127.0.0.1:18103]
2016/06/23 07:47:12 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:12 [INFO] consul: member 'Node 103' joined, marking health alive
2016/06/23 07:47:13 [DEBUG] dns: request for {v1.master.db.service.consul. 33 1} (960.696µs) from client 127.0.0.1:34080 (udp)
2016/06/23 07:47:13 [INFO] agent: requesting shutdown
2016/06/23 07:47:13 [INFO] consul: shutting down server
2016/06/23 07:47:13 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:13 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:13 [DEBUG] memberlist: Potential blocking operation. Last command took 23.947733ms
2016/06/23 07:47:13 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_TagPeriod (2.75s)
=== RUN   TestDNS_ServiceLookup_PreparedQueryNamePeriod
2016/06/23 07:47:14 [INFO] raft: Node at 127.0.0.1:18104 [Follower] entering Follower state
2016/06/23 07:47:14 [INFO] serf: EventMemberJoin: Node 104 127.0.0.1
2016/06/23 07:47:14 [INFO] consul: adding LAN server Node 104 (Addr: 127.0.0.1:18104) (DC: dc1)
2016/06/23 07:47:14 [INFO] serf: EventMemberJoin: Node 104.dc1 127.0.0.1
2016/06/23 07:47:14 [INFO] consul: adding WAN server Node 104.dc1 (Addr: 127.0.0.1:18104) (DC: dc1)
2016/06/23 07:47:14 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:14 [INFO] raft: Node at 127.0.0.1:18104 [Candidate] entering Candidate state
2016/06/23 07:47:14 [DEBUG] memberlist: Potential blocking operation. Last command took 22.893368ms
2016/06/23 07:47:15 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:15 [DEBUG] raft: Vote granted from 127.0.0.1:18104. Tally: 1
2016/06/23 07:47:15 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:15 [INFO] raft: Node at 127.0.0.1:18104 [Leader] entering Leader state
2016/06/23 07:47:15 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:15 [INFO] consul: New leader elected: Node 104
2016/06/23 07:47:15 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:15 [DEBUG] raft: Node 127.0.0.1:18104 updated peer set (2): [127.0.0.1:18104]
2016/06/23 07:47:15 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:15 [INFO] consul: member 'Node 104' joined, marking health alive
2016/06/23 07:47:16 [DEBUG] dns: request for {some.query.we.like.query.consul. 33 1} (683.02µs) from client 127.0.0.1:51112 (udp)
2016/06/23 07:47:16 [INFO] agent: requesting shutdown
2016/06/23 07:47:16 [INFO] consul: shutting down server
2016/06/23 07:47:16 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:16 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:16 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_PreparedQueryNamePeriod (3.20s)
=== RUN   TestDNS_ServiceLookup_Dedup
2016/06/23 07:47:17 [INFO] raft: Node at 127.0.0.1:18105 [Follower] entering Follower state
2016/06/23 07:47:17 [INFO] serf: EventMemberJoin: Node 105 127.0.0.1
2016/06/23 07:47:17 [INFO] consul: adding LAN server Node 105 (Addr: 127.0.0.1:18105) (DC: dc1)
2016/06/23 07:47:17 [INFO] serf: EventMemberJoin: Node 105.dc1 127.0.0.1
2016/06/23 07:47:17 [INFO] consul: adding WAN server Node 105.dc1 (Addr: 127.0.0.1:18105) (DC: dc1)
2016/06/23 07:47:17 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:17 [INFO] raft: Node at 127.0.0.1:18105 [Candidate] entering Candidate state
2016/06/23 07:47:18 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:18 [DEBUG] raft: Vote granted from 127.0.0.1:18105. Tally: 1
2016/06/23 07:47:18 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:18 [INFO] raft: Node at 127.0.0.1:18105 [Leader] entering Leader state
2016/06/23 07:47:18 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:18 [INFO] consul: New leader elected: Node 105
2016/06/23 07:47:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:19 [DEBUG] raft: Node 127.0.0.1:18105 updated peer set (2): [127.0.0.1:18105]
2016/06/23 07:47:19 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:19 [INFO] consul: member 'Node 105' joined, marking health alive
2016/06/23 07:47:20 [DEBUG] memberlist: Potential blocking operation. Last command took 24.212741ms
2016/06/23 07:47:21 [DEBUG] dns: request for {586b9050-1e74-2321-a7e0-b6b690d0f396.query.consul. 255 1} (625.686µs) from client 127.0.0.1:53591 (udp)
2016/06/23 07:47:21 [INFO] agent: requesting shutdown
2016/06/23 07:47:21 [INFO] consul: shutting down server
2016/06/23 07:47:21 [DEBUG] dns: request for {db.service.consul. 255 1} (879.027µs) from client 127.0.0.1:57060 (udp)
2016/06/23 07:47:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:21 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_Dedup (4.96s)
=== RUN   TestDNS_ServiceLookup_Dedup_SRV
2016/06/23 07:47:22 [INFO] raft: Node at 127.0.0.1:18106 [Follower] entering Follower state
2016/06/23 07:47:22 [INFO] serf: EventMemberJoin: Node 106 127.0.0.1
2016/06/23 07:47:22 [INFO] consul: adding LAN server Node 106 (Addr: 127.0.0.1:18106) (DC: dc1)
2016/06/23 07:47:22 [INFO] serf: EventMemberJoin: Node 106.dc1 127.0.0.1
2016/06/23 07:47:22 [INFO] consul: adding WAN server Node 106.dc1 (Addr: 127.0.0.1:18106) (DC: dc1)
2016/06/23 07:47:22 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:22 [INFO] raft: Node at 127.0.0.1:18106 [Candidate] entering Candidate state
2016/06/23 07:47:22 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:22 [DEBUG] raft: Vote granted from 127.0.0.1:18106. Tally: 1
2016/06/23 07:47:22 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:22 [INFO] raft: Node at 127.0.0.1:18106 [Leader] entering Leader state
2016/06/23 07:47:22 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:22 [INFO] consul: New leader elected: Node 106
2016/06/23 07:47:23 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:23 [DEBUG] raft: Node 127.0.0.1:18106 updated peer set (2): [127.0.0.1:18106]
2016/06/23 07:47:23 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:23 [INFO] consul: member 'Node 106' joined, marking health alive
2016/06/23 07:47:25 [DEBUG] dns: request for {db.service.consul. 33 1} (1.0887ms) from client 127.0.0.1:37715 (udp)
2016/06/23 07:47:25 [DEBUG] dns: request for {2758788f-e053-c190-6e1c-378ff15bfaed.query.consul. 33 1} (776.024µs) from client 127.0.0.1:47139 (udp)
2016/06/23 07:47:25 [INFO] agent: requesting shutdown
2016/06/23 07:47:25 [INFO] consul: shutting down server
2016/06/23 07:47:25 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:25 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:25 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_Dedup_SRV (3.72s)
=== RUN   TestDNS_Recurse
2016/06/23 07:47:26 [INFO] raft: Node at 127.0.0.1:18108 [Follower] entering Follower state
2016/06/23 07:47:26 [INFO] serf: EventMemberJoin: Node 108 127.0.0.1
2016/06/23 07:47:26 [INFO] consul: adding LAN server Node 108 (Addr: 127.0.0.1:18108) (DC: dc1)
2016/06/23 07:47:26 [INFO] serf: EventMemberJoin: Node 108.dc1 127.0.0.1
2016/06/23 07:47:26 [INFO] consul: adding WAN server Node 108.dc1 (Addr: 127.0.0.1:18108) (DC: dc1)
2016/06/23 07:47:26 [DEBUG] dns: recurse RTT for {apple.com. 255 1} (644.02µs)
2016/06/23 07:47:26 [DEBUG] dns: request for {apple.com. 255 1} (udp) (1.515046ms) from client 127.0.0.1:54542 (udp)
2016/06/23 07:47:26 [INFO] agent: requesting shutdown
2016/06/23 07:47:26 [INFO] consul: shutting down server
2016/06/23 07:47:26 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:26 [INFO] raft: Node at 127.0.0.1:18108 [Candidate] entering Candidate state
2016/06/23 07:47:26 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:26 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:26 [INFO] agent: shutdown complete
--- PASS: TestDNS_Recurse (1.19s)
=== RUN   TestDNS_ServiceLookup_FilterCritical
2016/06/23 07:47:27 [DEBUG] memberlist: Potential blocking operation. Last command took 21.470657ms
2016/06/23 07:47:27 [INFO] raft: Node at 127.0.0.1:18109 [Follower] entering Follower state
2016/06/23 07:47:27 [INFO] serf: EventMemberJoin: Node 109 127.0.0.1
2016/06/23 07:47:27 [INFO] consul: adding LAN server Node 109 (Addr: 127.0.0.1:18109) (DC: dc1)
2016/06/23 07:47:27 [INFO] serf: EventMemberJoin: Node 109.dc1 127.0.0.1
2016/06/23 07:47:27 [INFO] consul: adding WAN server Node 109.dc1 (Addr: 127.0.0.1:18109) (DC: dc1)
2016/06/23 07:47:27 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:27 [INFO] raft: Node at 127.0.0.1:18109 [Candidate] entering Candidate state
2016/06/23 07:47:28 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:28 [DEBUG] raft: Vote granted from 127.0.0.1:18109. Tally: 1
2016/06/23 07:47:28 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:28 [INFO] raft: Node at 127.0.0.1:18109 [Leader] entering Leader state
2016/06/23 07:47:28 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:28 [INFO] consul: New leader elected: Node 109
2016/06/23 07:47:28 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:28 [DEBUG] memberlist: Potential blocking operation. Last command took 31.5983ms
2016/06/23 07:47:28 [DEBUG] raft: Node 127.0.0.1:18109 updated peer set (2): [127.0.0.1:18109]
2016/06/23 07:47:28 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:29 [INFO] consul: member 'Node 109' joined, marking health alive
2016/06/23 07:47:30 [DEBUG] memberlist: Potential blocking operation. Last command took 30.592603ms
2016/06/23 07:47:31 [DEBUG] dns: request for {db.service.consul. 255 1} (937.029µs) from client 127.0.0.1:39466 (udp)
2016/06/23 07:47:31 [INFO] agent: requesting shutdown
2016/06/23 07:47:31 [INFO] consul: shutting down server
2016/06/23 07:47:31 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:31 [DEBUG] dns: request for {7bc7da0b-1e3f-4aa8-9299-0c9b0a6ddbda.query.consul. 255 1} (702.688µs) from client 127.0.0.1:38624 (udp)
2016/06/23 07:47:31 [DEBUG] memberlist: Potential blocking operation. Last command took 29.682242ms
2016/06/23 07:47:31 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:31 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_FilterCritical (5.29s)
=== RUN   TestDNS_ServiceLookup_OnlyFailing
2016/06/23 07:47:33 [INFO] raft: Node at 127.0.0.1:18110 [Follower] entering Follower state
2016/06/23 07:47:33 [INFO] serf: EventMemberJoin: Node 110 127.0.0.1
2016/06/23 07:47:33 [INFO] consul: adding LAN server Node 110 (Addr: 127.0.0.1:18110) (DC: dc1)
2016/06/23 07:47:33 [INFO] serf: EventMemberJoin: Node 110.dc1 127.0.0.1
2016/06/23 07:47:33 [INFO] consul: adding WAN server Node 110.dc1 (Addr: 127.0.0.1:18110) (DC: dc1)
2016/06/23 07:47:33 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:33 [INFO] raft: Node at 127.0.0.1:18110 [Candidate] entering Candidate state
2016/06/23 07:47:33 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:33 [DEBUG] raft: Vote granted from 127.0.0.1:18110. Tally: 1
2016/06/23 07:47:33 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:33 [INFO] raft: Node at 127.0.0.1:18110 [Leader] entering Leader state
2016/06/23 07:47:33 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:33 [INFO] consul: New leader elected: Node 110
2016/06/23 07:47:34 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:34 [DEBUG] raft: Node 127.0.0.1:18110 updated peer set (2): [127.0.0.1:18110]
2016/06/23 07:47:34 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:34 [INFO] consul: member 'Node 110' joined, marking health alive
2016/06/23 07:47:36 [DEBUG] dns: request for {db.service.consul. 255 1} (863.027µs) from client 127.0.0.1:40793 (udp)
2016/06/23 07:47:36 [INFO] agent: requesting shutdown
2016/06/23 07:47:36 [INFO] consul: shutting down server
2016/06/23 07:47:36 [DEBUG] dns: request for {31654fd6-6c39-d32f-dc7e-70353d39a4f9.query.consul. 255 1} (651.02µs) from client 127.0.0.1:52480 (udp)
2016/06/23 07:47:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:36 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_OnlyFailing (4.30s)
=== RUN   TestDNS_ServiceLookup_OnlyPassing
2016/06/23 07:47:37 [INFO] raft: Node at 127.0.0.1:18111 [Follower] entering Follower state
2016/06/23 07:47:37 [INFO] serf: EventMemberJoin: Node 111 127.0.0.1
2016/06/23 07:47:37 [INFO] consul: adding LAN server Node 111 (Addr: 127.0.0.1:18111) (DC: dc1)
2016/06/23 07:47:37 [INFO] serf: EventMemberJoin: Node 111.dc1 127.0.0.1
2016/06/23 07:47:37 [INFO] consul: adding WAN server Node 111.dc1 (Addr: 127.0.0.1:18111) (DC: dc1)
2016/06/23 07:47:37 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:37 [INFO] raft: Node at 127.0.0.1:18111 [Candidate] entering Candidate state
2016/06/23 07:47:38 [DEBUG] memberlist: Potential blocking operation. Last command took 22.12301ms
2016/06/23 07:47:38 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:38 [DEBUG] raft: Vote granted from 127.0.0.1:18111. Tally: 1
2016/06/23 07:47:38 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:38 [INFO] raft: Node at 127.0.0.1:18111 [Leader] entering Leader state
2016/06/23 07:47:38 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:38 [INFO] consul: New leader elected: Node 111
2016/06/23 07:47:38 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:38 [DEBUG] raft: Node 127.0.0.1:18111 updated peer set (2): [127.0.0.1:18111]
2016/06/23 07:47:38 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:38 [INFO] consul: member 'Node 111' joined, marking health alive
2016/06/23 07:47:40 [DEBUG] dns: request for {db.service.consul. 255 1} (920.028µs) from client 127.0.0.1:42266 (udp)
2016/06/23 07:47:40 [INFO] agent: requesting shutdown
2016/06/23 07:47:40 [INFO] consul: shutting down server
2016/06/23 07:47:40 [DEBUG] dns: request for {698f0996-64a4-72eb-085b-a9fe78192117.query.consul. 255 1} (659.353µs) from client 127.0.0.1:46739 (udp)
2016/06/23 07:47:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:40 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_OnlyPassing (4.27s)
=== RUN   TestDNS_ServiceLookup_Randomize
2016/06/23 07:47:41 [INFO] raft: Node at 127.0.0.1:18112 [Follower] entering Follower state
2016/06/23 07:47:41 [INFO] serf: EventMemberJoin: Node 112 127.0.0.1
2016/06/23 07:47:41 [INFO] consul: adding LAN server Node 112 (Addr: 127.0.0.1:18112) (DC: dc1)
2016/06/23 07:47:41 [INFO] serf: EventMemberJoin: Node 112.dc1 127.0.0.1
2016/06/23 07:47:41 [INFO] consul: adding WAN server Node 112.dc1 (Addr: 127.0.0.1:18112) (DC: dc1)
2016/06/23 07:47:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:41 [INFO] raft: Node at 127.0.0.1:18112 [Candidate] entering Candidate state
2016/06/23 07:47:41 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:41 [DEBUG] raft: Vote granted from 127.0.0.1:18112. Tally: 1
2016/06/23 07:47:41 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:41 [INFO] raft: Node at 127.0.0.1:18112 [Leader] entering Leader state
2016/06/23 07:47:41 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:41 [INFO] consul: New leader elected: Node 112
2016/06/23 07:47:42 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:42 [DEBUG] raft: Node 127.0.0.1:18112 updated peer set (2): [127.0.0.1:18112]
2016/06/23 07:47:42 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:42 [INFO] consul: member 'Node 112' joined, marking health alive
2016/06/23 07:47:46 [DEBUG] dns: request for {web.service.consul. 255 1} (1.250705ms) from client 127.0.0.1:51697 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {web.service.consul. 255 1} (1.051365ms) from client 127.0.0.1:43290 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {web.service.consul. 255 1} (1.809722ms) from client 127.0.0.1:53644 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {web.service.consul. 255 1} (1.021698ms) from client 127.0.0.1:49961 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {web.service.consul. 255 1} (984.03µs) from client 127.0.0.1:51388 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {web.service.consul. 255 1} (978.696µs) from client 127.0.0.1:38578 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {web.service.consul. 255 1} (1.015031ms) from client 127.0.0.1:33869 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {web.service.consul. 255 1} (1.471711ms) from client 127.0.0.1:39243 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {web.service.consul. 255 1} (1.019031ms) from client 127.0.0.1:42987 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {web.service.consul. 255 1} (1.067032ms) from client 127.0.0.1:44984 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {af8459bf-02aa-15b6-1bd3-69145c5c0496.query.consul. 255 1} (901.028µs) from client 127.0.0.1:59832 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {af8459bf-02aa-15b6-1bd3-69145c5c0496.query.consul. 255 1} (1.780721ms) from client 127.0.0.1:37763 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {af8459bf-02aa-15b6-1bd3-69145c5c0496.query.consul. 255 1} (2.567412ms) from client 127.0.0.1:54884 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {af8459bf-02aa-15b6-1bd3-69145c5c0496.query.consul. 255 1} (1.470712ms) from client 127.0.0.1:43712 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {af8459bf-02aa-15b6-1bd3-69145c5c0496.query.consul. 255 1} (812.359µs) from client 127.0.0.1:39847 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {af8459bf-02aa-15b6-1bd3-69145c5c0496.query.consul. 255 1} (866.027µs) from client 127.0.0.1:42608 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {af8459bf-02aa-15b6-1bd3-69145c5c0496.query.consul. 255 1} (830.026µs) from client 127.0.0.1:50833 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {af8459bf-02aa-15b6-1bd3-69145c5c0496.query.consul. 255 1} (954.362µs) from client 127.0.0.1:51378 (udp)
2016/06/23 07:47:46 [DEBUG] dns: request for {af8459bf-02aa-15b6-1bd3-69145c5c0496.query.consul. 255 1} (1.010364ms) from client 127.0.0.1:50067 (udp)
2016/06/23 07:47:46 [INFO] agent: requesting shutdown
2016/06/23 07:47:46 [INFO] consul: shutting down server
2016/06/23 07:47:46 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:46 [DEBUG] dns: request for {af8459bf-02aa-15b6-1bd3-69145c5c0496.query.consul. 255 1} (1.675718ms) from client 127.0.0.1:60755 (udp)
2016/06/23 07:47:46 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:46 [DEBUG] memberlist: Potential blocking operation. Last command took 46.453089ms
2016/06/23 07:47:46 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_Randomize (5.86s)
=== RUN   TestDNS_ServiceLookup_Truncate
2016/06/23 07:47:47 [INFO] raft: Node at 127.0.0.1:18113 [Follower] entering Follower state
2016/06/23 07:47:47 [INFO] serf: EventMemberJoin: Node 113 127.0.0.1
2016/06/23 07:47:47 [INFO] consul: adding LAN server Node 113 (Addr: 127.0.0.1:18113) (DC: dc1)
2016/06/23 07:47:47 [INFO] serf: EventMemberJoin: Node 113.dc1 127.0.0.1
2016/06/23 07:47:47 [INFO] consul: adding WAN server Node 113.dc1 (Addr: 127.0.0.1:18113) (DC: dc1)
2016/06/23 07:47:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:47 [INFO] raft: Node at 127.0.0.1:18113 [Candidate] entering Candidate state
2016/06/23 07:47:47 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:47 [DEBUG] raft: Vote granted from 127.0.0.1:18113. Tally: 1
2016/06/23 07:47:47 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:47 [INFO] raft: Node at 127.0.0.1:18113 [Leader] entering Leader state
2016/06/23 07:47:47 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:47 [INFO] consul: New leader elected: Node 113
2016/06/23 07:47:47 [DEBUG] memberlist: Potential blocking operation. Last command took 10.645659ms
2016/06/23 07:47:48 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:48 [DEBUG] raft: Node 127.0.0.1:18113 updated peer set (2): [127.0.0.1:18113]
2016/06/23 07:47:48 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:48 [INFO] consul: member 'Node 113' joined, marking health alive
2016/06/23 07:47:51 [DEBUG] dns: request for {web.service.consul. 255 1} (1.254371ms) from client 127.0.0.1:42398 (udp)
2016/06/23 07:47:51 [INFO] agent: requesting shutdown
2016/06/23 07:47:51 [DEBUG] dns: request for {e2b6ba49-bdb5-c8dc-12fd-b05f3b8d2d44.query.consul. 255 1} (926.028µs) from client 127.0.0.1:40715 (udp)
2016/06/23 07:47:51 [INFO] consul: shutting down server
2016/06/23 07:47:51 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:52 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:52 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_Truncate (5.82s)
=== RUN   TestDNS_ServiceLookup_LargeResponses
2016/06/23 07:47:52 [INFO] raft: Node at 127.0.0.1:18114 [Follower] entering Follower state
2016/06/23 07:47:52 [INFO] serf: EventMemberJoin: Node 114 127.0.0.1
2016/06/23 07:47:52 [INFO] consul: adding LAN server Node 114 (Addr: 127.0.0.1:18114) (DC: dc1)
2016/06/23 07:47:52 [INFO] serf: EventMemberJoin: Node 114.dc1 127.0.0.1
2016/06/23 07:47:52 [INFO] consul: adding WAN server Node 114.dc1 (Addr: 127.0.0.1:18114) (DC: dc1)
2016/06/23 07:47:53 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:53 [INFO] raft: Node at 127.0.0.1:18114 [Candidate] entering Candidate state
2016/06/23 07:47:53 [DEBUG] memberlist: Potential blocking operation. Last command took 25.551782ms
2016/06/23 07:47:53 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:53 [DEBUG] raft: Vote granted from 127.0.0.1:18114. Tally: 1
2016/06/23 07:47:53 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:53 [INFO] raft: Node at 127.0.0.1:18114 [Leader] entering Leader state
2016/06/23 07:47:53 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:53 [INFO] consul: New leader elected: Node 114
2016/06/23 07:47:53 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:53 [DEBUG] raft: Node 127.0.0.1:18114 updated peer set (2): [127.0.0.1:18114]
2016/06/23 07:47:54 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:54 [INFO] consul: member 'Node 114' joined, marking health alive
2016/06/23 07:47:54 [DEBUG] memberlist: Potential blocking operation. Last command took 39.999558ms
2016/06/23 07:47:55 [DEBUG] dns: request for {_this-is-a-very-very-very-very-very-long-name-for-a-service._master.service.consul. 33 1} (8.747935ms) from client 127.0.0.1:49214 (udp)
2016/06/23 07:47:55 [DEBUG] dns: request for {this-is-a-very-very-very-very-very-long-name-for-a-service.query.consul. 33 1} (1.1007ms) from client 127.0.0.1:57225 (udp)
2016/06/23 07:47:55 [INFO] agent: requesting shutdown
2016/06/23 07:47:55 [INFO] consul: shutting down server
2016/06/23 07:47:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:55 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_LargeResponses (3.61s)
=== RUN   TestDNS_ServiceLookup_MaxResponses
2016/06/23 07:47:56 [INFO] raft: Node at 127.0.0.1:18115 [Follower] entering Follower state
2016/06/23 07:47:56 [INFO] serf: EventMemberJoin: Node 115 127.0.0.1
2016/06/23 07:47:56 [INFO] serf: EventMemberJoin: Node 115.dc1 127.0.0.1
2016/06/23 07:47:56 [INFO] consul: adding LAN server Node 115 (Addr: 127.0.0.1:18115) (DC: dc1)
2016/06/23 07:47:56 [INFO] consul: adding WAN server Node 115.dc1 (Addr: 127.0.0.1:18115) (DC: dc1)
2016/06/23 07:47:56 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:56 [INFO] raft: Node at 127.0.0.1:18115 [Candidate] entering Candidate state
2016/06/23 07:47:57 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:57 [DEBUG] raft: Vote granted from 127.0.0.1:18115. Tally: 1
2016/06/23 07:47:57 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:57 [INFO] raft: Node at 127.0.0.1:18115 [Leader] entering Leader state
2016/06/23 07:47:57 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:57 [INFO] consul: New leader elected: Node 115
2016/06/23 07:47:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:57 [DEBUG] raft: Node 127.0.0.1:18115 updated peer set (2): [127.0.0.1:18115]
2016/06/23 07:47:58 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:58 [DEBUG] memberlist: Potential blocking operation. Last command took 24.467415ms
2016/06/23 07:47:59 [INFO] consul: member 'Node 115' joined, marking health alive
2016/06/23 07:48:00 [DEBUG] memberlist: Potential blocking operation. Last command took 30.720273ms
2016/06/23 07:48:04 [DEBUG] dns: request for {web.service.consul. 255 1} (1.986394ms) from client 127.0.0.1:51974 (udp)
2016/06/23 07:48:04 [DEBUG] dns: request for {web.service.consul. 1 1} (1.54238ms) from client 127.0.0.1:57281 (udp)
2016/06/23 07:48:04 [DEBUG] dns: request for {web.service.consul. 28 1} (2.011062ms) from client 127.0.0.1:48067 (udp)
2016/06/23 07:48:04 [DEBUG] dns: request for {68975687-aea9-2025-83b1-c2b97f0b0c76.query.consul. 255 1} (2.585413ms) from client 127.0.0.1:35871 (udp)
2016/06/23 07:48:04 [DEBUG] dns: request for {68975687-aea9-2025-83b1-c2b97f0b0c76.query.consul. 1 1} (1.520714ms) from client 127.0.0.1:58102 (udp)
2016/06/23 07:48:04 [DEBUG] dns: request for {68975687-aea9-2025-83b1-c2b97f0b0c76.query.consul. 28 1} (1.30104ms) from client 127.0.0.1:48461 (udp)
2016/06/23 07:48:04 [INFO] agent: requesting shutdown
2016/06/23 07:48:04 [INFO] consul: shutting down server
2016/06/23 07:48:04 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:04 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:04 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_MaxResponses (8.60s)
=== RUN   TestDNS_ServiceLookup_CNAME
2016/06/23 07:48:05 [INFO] serf: EventMemberJoin: Node 117 127.0.0.1
2016/06/23 07:48:05 [INFO] raft: Node at 127.0.0.1:18117 [Follower] entering Follower state
2016/06/23 07:48:05 [INFO] serf: EventMemberJoin: Node 117.dc1 127.0.0.1
2016/06/23 07:48:05 [INFO] consul: adding LAN server Node 117 (Addr: 127.0.0.1:18117) (DC: dc1)
2016/06/23 07:48:05 [INFO] consul: adding WAN server Node 117.dc1 (Addr: 127.0.0.1:18117) (DC: dc1)
2016/06/23 07:48:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:05 [INFO] raft: Node at 127.0.0.1:18117 [Candidate] entering Candidate state
2016/06/23 07:48:05 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:05 [DEBUG] raft: Vote granted from 127.0.0.1:18117. Tally: 1
2016/06/23 07:48:05 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:05 [INFO] raft: Node at 127.0.0.1:18117 [Leader] entering Leader state
2016/06/23 07:48:05 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:05 [INFO] consul: New leader elected: Node 117
2016/06/23 07:48:05 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:05 [DEBUG] raft: Node 127.0.0.1:18117 updated peer set (2): [127.0.0.1:18117]
2016/06/23 07:48:06 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:06 [INFO] consul: member 'Node 117' joined, marking health alive
2016/06/23 07:48:07 [DEBUG] dns: cname recurse RTT for www.google.com. (529.017µs)
2016/06/23 07:48:07 [DEBUG] dns: request for {search.service.consul. 255 1} (2.196401ms) from client 127.0.0.1:36166 (udp)
2016/06/23 07:48:07 [DEBUG] dns: cname recurse RTT for www.google.com. (489.682µs)
2016/06/23 07:48:07 [DEBUG] dns: request for {960b7472-6d3c-32f0-50b2-f49aad22e23e.query.consul. 255 1} (1.96406ms) from client 127.0.0.1:44418 (udp)
2016/06/23 07:48:07 [INFO] agent: requesting shutdown
2016/06/23 07:48:07 [INFO] consul: shutting down server
2016/06/23 07:48:07 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:07 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:07 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18500
2016/06/23 07:48:07 [DEBUG] memberlist: TCP connection from=127.0.0.1:49442
2016/06/23 07:48:08 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_CNAME (3.59s)
=== RUN   TestDNS_NodeLookup_TTL
2016/06/23 07:48:08 [INFO] raft: Node at 127.0.0.1:18119 [Follower] entering Follower state
2016/06/23 07:48:08 [INFO] serf: EventMemberJoin: Node 119 127.0.0.1
2016/06/23 07:48:08 [INFO] serf: EventMemberJoin: Node 119.dc1 127.0.0.1
2016/06/23 07:48:08 [INFO] consul: adding LAN server Node 119 (Addr: 127.0.0.1:18119) (DC: dc1)
2016/06/23 07:48:08 [INFO] consul: adding WAN server Node 119.dc1 (Addr: 127.0.0.1:18119) (DC: dc1)
2016/06/23 07:48:08 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:08 [INFO] raft: Node at 127.0.0.1:18119 [Candidate] entering Candidate state
2016/06/23 07:48:09 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:09 [DEBUG] raft: Vote granted from 127.0.0.1:18119. Tally: 1
2016/06/23 07:48:09 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:09 [INFO] raft: Node at 127.0.0.1:18119 [Leader] entering Leader state
2016/06/23 07:48:09 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:09 [INFO] consul: New leader elected: Node 119
2016/06/23 07:48:09 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:09 [DEBUG] raft: Node 127.0.0.1:18119 updated peer set (2): [127.0.0.1:18119]
2016/06/23 07:48:10 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:10 [INFO] consul: member 'Node 119' joined, marking health alive
2016/06/23 07:48:11 [DEBUG] dns: request for {foo.node.consul. 255 1} (888.361µs) from client 127.0.0.1:39514 (udp)
2016/06/23 07:48:11 [DEBUG] dns: request for {bar.node.consul. 255 1} (581.685µs) from client 127.0.0.1:54918 (udp)
2016/06/23 07:48:11 [DEBUG] dns: cname recurse RTT for www.google.com. (494.016µs)
2016/06/23 07:48:11 [DEBUG] dns: request for {google.node.consul. 255 1} (1.765387ms) from client 127.0.0.1:44405 (udp)
2016/06/23 07:48:11 [INFO] agent: requesting shutdown
2016/06/23 07:48:11 [INFO] consul: shutting down server
2016/06/23 07:48:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:12 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:12 [INFO] agent: shutdown complete
--- PASS: TestDNS_NodeLookup_TTL (4.27s)
=== RUN   TestDNS_ServiceLookup_TTL
2016/06/23 07:48:13 [INFO] raft: Node at 127.0.0.1:18120 [Follower] entering Follower state
2016/06/23 07:48:13 [INFO] serf: EventMemberJoin: Node 120 127.0.0.1
2016/06/23 07:48:13 [INFO] serf: EventMemberJoin: Node 120.dc1 127.0.0.1
2016/06/23 07:48:13 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:13 [INFO] raft: Node at 127.0.0.1:18120 [Candidate] entering Candidate state
2016/06/23 07:48:13 [INFO] consul: adding WAN server Node 120.dc1 (Addr: 127.0.0.1:18120) (DC: dc1)
2016/06/23 07:48:13 [INFO] consul: adding LAN server Node 120 (Addr: 127.0.0.1:18120) (DC: dc1)
2016/06/23 07:48:14 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:14 [DEBUG] raft: Vote granted from 127.0.0.1:18120. Tally: 1
2016/06/23 07:48:14 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:14 [INFO] raft: Node at 127.0.0.1:18120 [Leader] entering Leader state
2016/06/23 07:48:14 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:14 [INFO] consul: New leader elected: Node 120
2016/06/23 07:48:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:14 [DEBUG] raft: Node 127.0.0.1:18120 updated peer set (2): [127.0.0.1:18120]
2016/06/23 07:48:14 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:15 [DEBUG] memberlist: Potential blocking operation. Last command took 44.296689ms
2016/06/23 07:48:15 [INFO] consul: member 'Node 120' joined, marking health alive
2016/06/23 07:48:16 [DEBUG] dns: request for {db.service.consul. 33 1} (878.361µs) from client 127.0.0.1:39633 (udp)
2016/06/23 07:48:16 [DEBUG] dns: request for {api.service.consul. 33 1} (798.357µs) from client 127.0.0.1:48756 (udp)
2016/06/23 07:48:16 [INFO] agent: requesting shutdown
2016/06/23 07:48:16 [INFO] consul: shutting down server
2016/06/23 07:48:16 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:16 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:16 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_TTL (4.11s)
=== RUN   TestDNS_PreparedQuery_TTL
2016/06/23 07:48:17 [INFO] raft: Node at 127.0.0.1:18121 [Follower] entering Follower state
2016/06/23 07:48:17 [INFO] serf: EventMemberJoin: Node 121 127.0.0.1
2016/06/23 07:48:17 [INFO] consul: adding LAN server Node 121 (Addr: 127.0.0.1:18121) (DC: dc1)
2016/06/23 07:48:17 [INFO] serf: EventMemberJoin: Node 121.dc1 127.0.0.1
2016/06/23 07:48:17 [INFO] consul: adding WAN server Node 121.dc1 (Addr: 127.0.0.1:18121) (DC: dc1)
2016/06/23 07:48:17 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:17 [INFO] raft: Node at 127.0.0.1:18121 [Candidate] entering Candidate state
2016/06/23 07:48:17 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:17 [DEBUG] raft: Vote granted from 127.0.0.1:18121. Tally: 1
2016/06/23 07:48:17 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:17 [INFO] raft: Node at 127.0.0.1:18121 [Leader] entering Leader state
2016/06/23 07:48:17 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:17 [INFO] consul: New leader elected: Node 121
2016/06/23 07:48:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:18 [DEBUG] raft: Node 127.0.0.1:18121 updated peer set (2): [127.0.0.1:18121]
2016/06/23 07:48:18 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:19 [INFO] consul: member 'Node 121' joined, marking health alive
2016/06/23 07:48:21 [DEBUG] dns: request for {db-ttl.query.consul. 33 1} (742.689µs) from client 127.0.0.1:48665 (udp)
2016/06/23 07:48:21 [DEBUG] dns: request for {db-nottl.query.consul. 33 1} (1.665051ms) from client 127.0.0.1:41464 (udp)
2016/06/23 07:48:21 [INFO] agent: requesting shutdown
2016/06/23 07:48:21 [INFO] consul: shutting down server
2016/06/23 07:48:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:21 [DEBUG] dns: request for {api-nottl.query.consul. 33 1} (628.353µs) from client 127.0.0.1:39268 (udp)
2016/06/23 07:48:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:21 [INFO] agent: shutdown complete
--- PASS: TestDNS_PreparedQuery_TTL (5.48s)
=== RUN   TestDNS_ServiceLookup_SRV_RFC
2016/06/23 07:48:22 [INFO] raft: Node at 127.0.0.1:18122 [Follower] entering Follower state
2016/06/23 07:48:22 [INFO] serf: EventMemberJoin: Node 122 127.0.0.1
2016/06/23 07:48:22 [INFO] consul: adding LAN server Node 122 (Addr: 127.0.0.1:18122) (DC: dc1)
2016/06/23 07:48:22 [INFO] serf: EventMemberJoin: Node 122.dc1 127.0.0.1
2016/06/23 07:48:22 [INFO] consul: adding WAN server Node 122.dc1 (Addr: 127.0.0.1:18122) (DC: dc1)
2016/06/23 07:48:22 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:22 [INFO] raft: Node at 127.0.0.1:18122 [Candidate] entering Candidate state
2016/06/23 07:48:23 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:23 [DEBUG] raft: Vote granted from 127.0.0.1:18122. Tally: 1
2016/06/23 07:48:23 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:23 [INFO] raft: Node at 127.0.0.1:18122 [Leader] entering Leader state
2016/06/23 07:48:23 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:23 [INFO] consul: New leader elected: Node 122
2016/06/23 07:48:23 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:23 [DEBUG] raft: Node 127.0.0.1:18122 updated peer set (2): [127.0.0.1:18122]
2016/06/23 07:48:23 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:24 [INFO] consul: member 'Node 122' joined, marking health alive
2016/06/23 07:48:24 [DEBUG] dns: request for {_db._master.service.consul. 33 1} (957.696µs) from client 127.0.0.1:60962 (udp)
2016/06/23 07:48:24 [INFO] agent: requesting shutdown
2016/06/23 07:48:24 [INFO] consul: shutting down server
2016/06/23 07:48:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:25 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_SRV_RFC (3.22s)
=== RUN   TestDNS_ServiceLookup_SRV_RFC_TCP_Default
2016/06/23 07:48:26 [INFO] raft: Node at 127.0.0.1:18123 [Follower] entering Follower state
2016/06/23 07:48:26 [INFO] serf: EventMemberJoin: Node 123 127.0.0.1
2016/06/23 07:48:26 [INFO] consul: adding LAN server Node 123 (Addr: 127.0.0.1:18123) (DC: dc1)
2016/06/23 07:48:26 [INFO] serf: EventMemberJoin: Node 123.dc1 127.0.0.1
2016/06/23 07:48:26 [INFO] consul: adding WAN server Node 123.dc1 (Addr: 127.0.0.1:18123) (DC: dc1)
2016/06/23 07:48:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:26 [INFO] raft: Node at 127.0.0.1:18123 [Candidate] entering Candidate state
2016/06/23 07:48:27 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:27 [DEBUG] raft: Vote granted from 127.0.0.1:18123. Tally: 1
2016/06/23 07:48:27 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:27 [INFO] raft: Node at 127.0.0.1:18123 [Leader] entering Leader state
2016/06/23 07:48:27 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:27 [INFO] consul: New leader elected: Node 123
2016/06/23 07:48:27 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:27 [DEBUG] raft: Node 127.0.0.1:18123 updated peer set (2): [127.0.0.1:18123]
2016/06/23 07:48:27 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:28 [INFO] consul: member 'Node 123' joined, marking health alive
2016/06/23 07:48:29 [DEBUG] memberlist: Potential blocking operation. Last command took 59.627492ms
2016/06/23 07:48:29 [DEBUG] dns: request for {_db._tcp.service.consul. 33 1} (938.362µs) from client 127.0.0.1:54795 (udp)
2016/06/23 07:48:29 [INFO] agent: requesting shutdown
2016/06/23 07:48:29 [INFO] consul: shutting down server
2016/06/23 07:48:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:29 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_SRV_RFC_TCP_Default (4.38s)
=== RUN   TestDNS_ServiceLookup_FilterACL
2016/06/23 07:48:30 [INFO] raft: Node at 127.0.0.1:18124 [Follower] entering Follower state
2016/06/23 07:48:30 [INFO] serf: EventMemberJoin: Node 124 127.0.0.1
2016/06/23 07:48:30 [INFO] consul: adding LAN server Node 124 (Addr: 127.0.0.1:18124) (DC: dc1)
2016/06/23 07:48:30 [INFO] serf: EventMemberJoin: Node 124.dc1 127.0.0.1
2016/06/23 07:48:30 [INFO] consul: adding WAN server Node 124.dc1 (Addr: 127.0.0.1:18124) (DC: dc1)
2016/06/23 07:48:30 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:30 [INFO] raft: Node at 127.0.0.1:18124 [Candidate] entering Candidate state
2016/06/23 07:48:30 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:30 [DEBUG] raft: Vote granted from 127.0.0.1:18124. Tally: 1
2016/06/23 07:48:30 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:30 [INFO] raft: Node at 127.0.0.1:18124 [Leader] entering Leader state
2016/06/23 07:48:30 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:30 [INFO] consul: New leader elected: Node 124
2016/06/23 07:48:31 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:31 [ERR] scada-client: failed to handshake: invalid token
2016/06/23 07:48:31 [DEBUG] raft: Node 127.0.0.1:18124 updated peer set (2): [127.0.0.1:18124]
2016/06/23 07:48:31 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:31 [INFO] consul: member 'Node 124' joined, marking health alive
2016/06/23 07:48:32 [DEBUG] dns: request for {foo.service.consul. 1 1} (882.694µs) from client 127.0.0.1:38088 (udp)
2016/06/23 07:48:32 [DEBUG] consul: dropping node "foo" from result due to ACLs
2016/06/23 07:48:32 [DEBUG] dns: request for {foo.service.consul. 1 1} (1.145369ms) from client 127.0.0.1:55720 (udp)
2016/06/23 07:48:32 [INFO] agent: requesting shutdown
2016/06/23 07:48:32 [INFO] consul: shutting down server
2016/06/23 07:48:32 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:32 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:32 [INFO] agent: shutdown complete
--- PASS: TestDNS_ServiceLookup_FilterACL (3.02s)
=== RUN   TestDNS_NonExistingLookup
2016/06/23 07:48:33 [INFO] raft: Node at 127.0.0.1:18125 [Follower] entering Follower state
2016/06/23 07:48:33 [INFO] serf: EventMemberJoin: Node 125 127.0.0.1
2016/06/23 07:48:33 [INFO] consul: adding LAN server Node 125 (Addr: 127.0.0.1:18125) (DC: dc1)
2016/06/23 07:48:33 [INFO] serf: EventMemberJoin: Node 125.dc1 127.0.0.1
2016/06/23 07:48:33 [INFO] consul: adding WAN server Node 125.dc1 (Addr: 127.0.0.1:18125) (DC: dc1)
2016/06/23 07:48:33 [WARN] dns: QName invalid: nonexisting.
2016/06/23 07:48:33 [INFO] agent: requesting shutdown
2016/06/23 07:48:33 [INFO] consul: shutting down server
2016/06/23 07:48:33 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:33 [DEBUG] dns: request for {nonexisting.consul. 255 1} (492.682µs) from client 127.0.0.1:51151 (udp)
2016/06/23 07:48:33 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:33 [INFO] raft: Node at 127.0.0.1:18125 [Candidate] entering Candidate state
2016/06/23 07:48:33 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:33 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:33 [INFO] agent: shutdown complete
--- PASS: TestDNS_NonExistingLookup (1.50s)
=== RUN   TestDNS_NonExistingLookupEmptyAorAAAA
2016/06/23 07:48:34 [INFO] raft: Node at 127.0.0.1:18126 [Follower] entering Follower state
2016/06/23 07:48:34 [INFO] serf: EventMemberJoin: Node 126 127.0.0.1
2016/06/23 07:48:34 [INFO] consul: adding LAN server Node 126 (Addr: 127.0.0.1:18126) (DC: dc1)
2016/06/23 07:48:34 [INFO] serf: EventMemberJoin: Node 126.dc1 127.0.0.1
2016/06/23 07:48:34 [INFO] consul: adding WAN server Node 126.dc1 (Addr: 127.0.0.1:18126) (DC: dc1)
2016/06/23 07:48:34 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:34 [INFO] raft: Node at 127.0.0.1:18126 [Candidate] entering Candidate state
2016/06/23 07:48:35 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:35 [DEBUG] raft: Vote granted from 127.0.0.1:18126. Tally: 1
2016/06/23 07:48:35 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:35 [INFO] raft: Node at 127.0.0.1:18126 [Leader] entering Leader state
2016/06/23 07:48:35 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:35 [INFO] consul: New leader elected: Node 126
2016/06/23 07:48:35 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:35 [DEBUG] raft: Node 127.0.0.1:18126 updated peer set (2): [127.0.0.1:18126]
2016/06/23 07:48:36 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:36 [INFO] consul: member 'Node 126' joined, marking health alive
2016/06/23 07:48:37 [DEBUG] memberlist: Potential blocking operation. Last command took 33.278685ms
2016/06/23 07:48:38 [DEBUG] dns: request for {webv4.service.consul. 28 1} (1.084033ms) from client 127.0.0.1:49987 (udp)
2016/06/23 07:48:38 [DEBUG] dns: request for {webv4.query.consul. 28 1} (574.018µs) from client 127.0.0.1:34013 (udp)
2016/06/23 07:48:38 [DEBUG] dns: request for {webv6.service.consul. 1 1} (742.356µs) from client 127.0.0.1:50187 (udp)
2016/06/23 07:48:38 [INFO] agent: requesting shutdown
2016/06/23 07:48:38 [INFO] consul: shutting down server
2016/06/23 07:48:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:38 [DEBUG] dns: request for {webv6.query.consul. 1 1} (564.684µs) from client 127.0.0.1:51913 (udp)
2016/06/23 07:48:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:38 [INFO] agent: shutdown complete
--- PASS: TestDNS_NonExistingLookupEmptyAorAAAA (4.49s)
=== RUN   TestDNS_PreparedQuery_AllowStale
2016/06/23 07:48:39 [INFO] raft: Node at 127.0.0.1:18127 [Follower] entering Follower state
2016/06/23 07:48:39 [INFO] serf: EventMemberJoin: Node 127 127.0.0.1
2016/06/23 07:48:39 [INFO] consul: adding LAN server Node 127 (Addr: 127.0.0.1:18127) (DC: dc1)
2016/06/23 07:48:39 [INFO] serf: EventMemberJoin: Node 127.dc1 127.0.0.1
2016/06/23 07:48:39 [INFO] consul: adding WAN server Node 127.dc1 (Addr: 127.0.0.1:18127) (DC: dc1)
2016/06/23 07:48:39 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:39 [INFO] raft: Node at 127.0.0.1:18127 [Candidate] entering Candidate state
2016/06/23 07:48:39 [DEBUG] memberlist: Potential blocking operation. Last command took 28.935553ms
2016/06/23 07:48:39 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:39 [DEBUG] raft: Vote granted from 127.0.0.1:18127. Tally: 1
2016/06/23 07:48:39 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:39 [INFO] raft: Node at 127.0.0.1:18127 [Leader] entering Leader state
2016/06/23 07:48:39 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:39 [INFO] consul: New leader elected: Node 127
2016/06/23 07:48:40 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:40 [DEBUG] raft: Node 127.0.0.1:18127 updated peer set (2): [127.0.0.1:18127]
2016/06/23 07:48:40 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:41 [INFO] consul: member 'Node 127' joined, marking health alive
2016/06/23 07:48:41 [WARN] consul: endpoint injected; this should only be used for testing
2016/06/23 07:48:41 [WARN] agent: endpoint injected; this should only be used for testing
2016/06/23 07:48:41 [WARN] dns: Query results too stale, re-requesting
2016/06/23 07:48:41 [DEBUG] dns: request for {nope.query.consul. 33 1} (935.362µs) from client 127.0.0.1:51143 (udp)
2016/06/23 07:48:41 [INFO] agent: requesting shutdown
2016/06/23 07:48:41 [INFO] consul: shutting down server
2016/06/23 07:48:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:41 [INFO] agent: shutdown complete
--- PASS: TestDNS_PreparedQuery_AllowStale (3.52s)
=== RUN   TestDNS_InvalidQueries
2016/06/23 07:48:42 [INFO] raft: Node at 127.0.0.1:18128 [Follower] entering Follower state
2016/06/23 07:48:42 [INFO] serf: EventMemberJoin: Node 128 127.0.0.1
2016/06/23 07:48:42 [INFO] consul: adding LAN server Node 128 (Addr: 127.0.0.1:18128) (DC: dc1)
2016/06/23 07:48:42 [INFO] serf: EventMemberJoin: Node 128.dc1 127.0.0.1
2016/06/23 07:48:42 [INFO] consul: adding WAN server Node 128.dc1 (Addr: 127.0.0.1:18128) (DC: dc1)
2016/06/23 07:48:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:42 [INFO] raft: Node at 127.0.0.1:18128 [Candidate] entering Candidate state
2016/06/23 07:48:43 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:43 [DEBUG] raft: Vote granted from 127.0.0.1:18128. Tally: 1
2016/06/23 07:48:43 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:43 [INFO] raft: Node at 127.0.0.1:18128 [Leader] entering Leader state
2016/06/23 07:48:43 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:43 [INFO] consul: New leader elected: Node 128
2016/06/23 07:48:43 [DEBUG] memberlist: Potential blocking operation. Last command took 35.885431ms
2016/06/23 07:48:43 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:43 [DEBUG] raft: Node 127.0.0.1:18128 updated peer set (2): [127.0.0.1:18128]
2016/06/23 07:48:44 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:44 [INFO] consul: member 'Node 128' joined, marking health alive
2016/06/23 07:48:45 [WARN] dns: QName invalid: 
2016/06/23 07:48:45 [DEBUG] dns: request for {consul. 33 1} (365.011µs) from client 127.0.0.1:33167 (udp)
2016/06/23 07:48:45 [WARN] dns: QName invalid: node.
2016/06/23 07:48:45 [DEBUG] dns: request for {node.consul. 33 1} (344.343µs) from client 127.0.0.1:45169 (udp)
2016/06/23 07:48:45 [WARN] dns: QName invalid: service.
2016/06/23 07:48:45 [DEBUG] dns: request for {service.consul. 33 1} (268.341µs) from client 127.0.0.1:57503 (udp)
2016/06/23 07:48:45 [WARN] dns: QName invalid: query.
2016/06/23 07:48:45 [DEBUG] dns: request for {query.consul. 33 1} (277.342µs) from client 127.0.0.1:49495 (udp)
2016/06/23 07:48:45 [INFO] agent: requesting shutdown
2016/06/23 07:48:45 [INFO] consul: shutting down server
2016/06/23 07:48:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:45 [INFO] agent: shutdown complete
--- PASS: TestDNS_InvalidQueries (3.39s)
=== RUN   TestEventFire
2016/06/23 07:48:46 [DEBUG] memberlist: Potential blocking operation. Last command took 30.774276ms
2016/06/23 07:48:46 [INFO] serf: EventMemberJoin: Node 129 127.0.0.1
2016/06/23 07:48:46 [INFO] serf: EventMemberJoin: Node 129.dc1 127.0.0.1
2016/06/23 07:48:46 [INFO] raft: Node at 127.0.0.1:18129 [Follower] entering Follower state
2016/06/23 07:48:46 [INFO] consul: adding LAN server Node 129 (Addr: 127.0.0.1:18129) (DC: dc1)
2016/06/23 07:48:46 [INFO] consul: adding WAN server Node 129.dc1 (Addr: 127.0.0.1:18129) (DC: dc1)
2016/06/23 07:48:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:46 [INFO] raft: Node at 127.0.0.1:18129 [Candidate] entering Candidate state
2016/06/23 07:48:46 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:46 [DEBUG] raft: Vote granted from 127.0.0.1:18129. Tally: 1
2016/06/23 07:48:46 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:46 [INFO] raft: Node at 127.0.0.1:18129 [Leader] entering Leader state
2016/06/23 07:48:46 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:46 [INFO] consul: New leader elected: Node 129
2016/06/23 07:48:47 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:47 [DEBUG] raft: Node 127.0.0.1:18129 updated peer set (2): [127.0.0.1:18129]
2016/06/23 07:48:47 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:47 [INFO] consul: member 'Node 129' joined, marking health alive
2016/06/23 07:48:47 [INFO] agent: requesting shutdown
2016/06/23 07:48:47 [INFO] consul: shutting down server
2016/06/23 07:48:47 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:47 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:47 [INFO] agent: shutdown complete
2016/06/23 07:48:47 [DEBUG] http: Shutting down http server (127.0.0.1:18929)
--- PASS: TestEventFire (2.56s)
=== RUN   TestEventFire_token
2016/06/23 07:48:49 [INFO] raft: Node at 127.0.0.1:18130 [Follower] entering Follower state
2016/06/23 07:48:49 [INFO] serf: EventMemberJoin: Node 130 127.0.0.1
2016/06/23 07:48:49 [INFO] consul: adding LAN server Node 130 (Addr: 127.0.0.1:18130) (DC: dc1)
2016/06/23 07:48:49 [INFO] serf: EventMemberJoin: Node 130.dc1 127.0.0.1
2016/06/23 07:48:49 [INFO] consul: adding WAN server Node 130.dc1 (Addr: 127.0.0.1:18130) (DC: dc1)
2016/06/23 07:48:49 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:49 [INFO] raft: Node at 127.0.0.1:18130 [Candidate] entering Candidate state
2016/06/23 07:48:50 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:50 [DEBUG] raft: Vote granted from 127.0.0.1:18130. Tally: 1
2016/06/23 07:48:50 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:50 [INFO] raft: Node at 127.0.0.1:18130 [Leader] entering Leader state
2016/06/23 07:48:50 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:50 [INFO] consul: New leader elected: Node 130
2016/06/23 07:48:50 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:50 [DEBUG] raft: Node 127.0.0.1:18130 updated peer set (2): [127.0.0.1:18130]
2016/06/23 07:48:50 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:51 [INFO] consul: member 'Node 130' joined, marking health alive
2016/06/23 07:48:52 [WARN] consul: user event "foo" blocked by ACLs
2016/06/23 07:48:52 [WARN] consul: user event "bar" blocked by ACLs
2016/06/23 07:48:52 [INFO] agent: requesting shutdown
2016/06/23 07:48:52 [INFO] consul: shutting down server
2016/06/23 07:48:52 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:52 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:52 [INFO] agent: shutdown complete
2016/06/23 07:48:52 [DEBUG] http: Shutting down http server (127.0.0.1:18930)
--- PASS: TestEventFire_token (4.35s)
=== RUN   TestEventList
2016/06/23 07:48:53 [INFO] raft: Node at 127.0.0.1:18131 [Follower] entering Follower state
2016/06/23 07:48:53 [INFO] serf: EventMemberJoin: Node 131 127.0.0.1
2016/06/23 07:48:53 [INFO] consul: adding LAN server Node 131 (Addr: 127.0.0.1:18131) (DC: dc1)
2016/06/23 07:48:53 [INFO] serf: EventMemberJoin: Node 131.dc1 127.0.0.1
2016/06/23 07:48:53 [INFO] consul: adding WAN server Node 131.dc1 (Addr: 127.0.0.1:18131) (DC: dc1)
2016/06/23 07:48:53 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:53 [INFO] raft: Node at 127.0.0.1:18131 [Candidate] entering Candidate state
2016/06/23 07:48:54 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:54 [DEBUG] raft: Vote granted from 127.0.0.1:18131. Tally: 1
2016/06/23 07:48:54 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:54 [INFO] raft: Node at 127.0.0.1:18131 [Leader] entering Leader state
2016/06/23 07:48:54 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:54 [INFO] consul: New leader elected: Node 131
2016/06/23 07:48:54 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:54 [DEBUG] raft: Node 127.0.0.1:18131 updated peer set (2): [127.0.0.1:18131]
2016/06/23 07:48:54 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:55 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18501
2016/06/23 07:48:55 [DEBUG] memberlist: TCP connection from=127.0.0.1:37738
2016/06/23 07:48:55 [INFO] consul: member 'Node 131' joined, marking health alive
2016/06/23 07:48:55 [DEBUG] consul: user event: test
2016/06/23 07:48:55 [DEBUG] agent: new event: test (f7974f2d-385b-1d14-d826-cb7ed9051eeb)
2016/06/23 07:48:55 [INFO] agent: requesting shutdown
2016/06/23 07:48:55 [INFO] consul: shutting down server
2016/06/23 07:48:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:56 [INFO] agent: shutdown complete
2016/06/23 07:48:56 [DEBUG] http: Shutting down http server (127.0.0.1:18931)
--- PASS: TestEventList (3.82s)
=== RUN   TestEventList_Filter
2016/06/23 07:48:57 [DEBUG] memberlist: Potential blocking operation. Last command took 33.42369ms
2016/06/23 07:48:57 [INFO] raft: Node at 127.0.0.1:18132 [Follower] entering Follower state
2016/06/23 07:48:57 [INFO] serf: EventMemberJoin: Node 132 127.0.0.1
2016/06/23 07:48:57 [INFO] serf: EventMemberJoin: Node 132.dc1 127.0.0.1
2016/06/23 07:48:57 [INFO] consul: adding LAN server Node 132 (Addr: 127.0.0.1:18132) (DC: dc1)
2016/06/23 07:48:57 [INFO] consul: adding WAN server Node 132.dc1 (Addr: 127.0.0.1:18132) (DC: dc1)
2016/06/23 07:48:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:48:57 [INFO] raft: Node at 127.0.0.1:18132 [Candidate] entering Candidate state
2016/06/23 07:48:58 [DEBUG] raft: Votes needed: 1
2016/06/23 07:48:58 [DEBUG] raft: Vote granted from 127.0.0.1:18132. Tally: 1
2016/06/23 07:48:58 [INFO] raft: Election won. Tally: 1
2016/06/23 07:48:58 [INFO] raft: Node at 127.0.0.1:18132 [Leader] entering Leader state
2016/06/23 07:48:58 [INFO] consul: cluster leadership acquired
2016/06/23 07:48:58 [INFO] consul: New leader elected: Node 132
2016/06/23 07:48:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:48:58 [DEBUG] raft: Node 127.0.0.1:18132 updated peer set (2): [127.0.0.1:18132]
2016/06/23 07:48:58 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:48:58 [INFO] consul: member 'Node 132' joined, marking health alive
2016/06/23 07:48:59 [DEBUG] consul: user event: test
2016/06/23 07:48:59 [DEBUG] consul: user event: foo
2016/06/23 07:48:59 [DEBUG] agent: new event: test (4821470d-d831-9a3c-908e-cb5f615f9f8e)
2016/06/23 07:48:59 [DEBUG] agent: new event: foo (26571565-2c36-e50e-66f0-827afdd4c6f8)
2016/06/23 07:48:59 [INFO] agent: requesting shutdown
2016/06/23 07:48:59 [INFO] consul: shutting down server
2016/06/23 07:48:59 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:59 [WARN] serf: Shutdown without a Leave
2016/06/23 07:48:59 [INFO] agent: shutdown complete
2016/06/23 07:48:59 [DEBUG] http: Shutting down http server (127.0.0.1:18932)
--- PASS: TestEventList_Filter (3.29s)
=== RUN   TestEventList_Blocking
2016/06/23 07:48:59 [INFO] raft: Node at 127.0.0.1:18133 [Follower] entering Follower state
2016/06/23 07:48:59 [INFO] serf: EventMemberJoin: Node 133 127.0.0.1
2016/06/23 07:48:59 [INFO] consul: adding LAN server Node 133 (Addr: 127.0.0.1:18133) (DC: dc1)
2016/06/23 07:48:59 [INFO] serf: EventMemberJoin: Node 133.dc1 127.0.0.1
2016/06/23 07:48:59 [INFO] consul: adding WAN server Node 133.dc1 (Addr: 127.0.0.1:18133) (DC: dc1)
2016/06/23 07:49:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:00 [INFO] raft: Node at 127.0.0.1:18133 [Candidate] entering Candidate state
2016/06/23 07:49:00 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:00 [DEBUG] raft: Vote granted from 127.0.0.1:18133. Tally: 1
2016/06/23 07:49:00 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:00 [INFO] raft: Node at 127.0.0.1:18133 [Leader] entering Leader state
2016/06/23 07:49:00 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:00 [INFO] consul: New leader elected: Node 133
2016/06/23 07:49:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:00 [DEBUG] raft: Node 127.0.0.1:18133 updated peer set (2): [127.0.0.1:18133]
2016/06/23 07:49:00 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:01 [INFO] consul: member 'Node 133' joined, marking health alive
2016/06/23 07:49:01 [DEBUG] consul: user event: test
2016/06/23 07:49:01 [DEBUG] agent: new event: test (ce625455-4779-0972-7566-91e7d7342a8e)
2016/06/23 07:49:01 [DEBUG] consul: user event: second
2016/06/23 07:49:01 [DEBUG] agent: new event: second (1d6fd02c-f8dd-62d7-9a5d-baf99b77b7d9)
2016/06/23 07:49:01 [INFO] agent: requesting shutdown
2016/06/23 07:49:01 [INFO] consul: shutting down server
2016/06/23 07:49:01 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:01 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:01 [INFO] agent: shutdown complete
2016/06/23 07:49:01 [DEBUG] http: Shutting down http server (127.0.0.1:18933)
--- PASS: TestEventList_Blocking (2.15s)
=== RUN   TestEventList_EventBufOrder
2016/06/23 07:49:01 [DEBUG] memberlist: Potential blocking operation. Last command took 34.392719ms
2016/06/23 07:49:02 [INFO] raft: Node at 127.0.0.1:18134 [Follower] entering Follower state
2016/06/23 07:49:02 [INFO] serf: EventMemberJoin: Node 134 127.0.0.1
2016/06/23 07:49:02 [INFO] serf: EventMemberJoin: Node 134.dc1 127.0.0.1
2016/06/23 07:49:02 [INFO] consul: adding LAN server Node 134 (Addr: 127.0.0.1:18134) (DC: dc1)
2016/06/23 07:49:02 [INFO] consul: adding WAN server Node 134.dc1 (Addr: 127.0.0.1:18134) (DC: dc1)
2016/06/23 07:49:02 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:02 [INFO] raft: Node at 127.0.0.1:18134 [Candidate] entering Candidate state
2016/06/23 07:49:02 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:02 [DEBUG] raft: Vote granted from 127.0.0.1:18134. Tally: 1
2016/06/23 07:49:02 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:02 [INFO] raft: Node at 127.0.0.1:18134 [Leader] entering Leader state
2016/06/23 07:49:02 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:02 [INFO] consul: New leader elected: Node 134
2016/06/23 07:49:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:02 [DEBUG] raft: Node 127.0.0.1:18134 updated peer set (2): [127.0.0.1:18134]
2016/06/23 07:49:03 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:03 [INFO] consul: member 'Node 134' joined, marking health alive
2016/06/23 07:49:03 [DEBUG] consul: user event: foo
2016/06/23 07:49:03 [DEBUG] consul: user event: bar
2016/06/23 07:49:03 [DEBUG] consul: user event: foo
2016/06/23 07:49:03 [DEBUG] consul: user event: foo
2016/06/23 07:49:03 [DEBUG] consul: user event: bar
2016/06/23 07:49:03 [DEBUG] agent: new event: foo (289e602c-8df3-c930-68e5-c7eea98d3664)
2016/06/23 07:49:03 [DEBUG] agent: new event: bar (b7b012ee-d5ed-bd5d-67ab-2550e802f4d2)
2016/06/23 07:49:03 [DEBUG] agent: new event: foo (2f95b24f-db7c-741e-883f-0f1026d03793)
2016/06/23 07:49:03 [DEBUG] agent: new event: foo (734d20a1-3714-e410-c941-3217d117bc35)
2016/06/23 07:49:03 [DEBUG] agent: new event: bar (363f11a6-4127-c994-249a-fa00143c4bf2)
2016/06/23 07:49:03 [INFO] agent: requesting shutdown
2016/06/23 07:49:03 [INFO] consul: shutting down server
2016/06/23 07:49:03 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:03 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:04 [INFO] agent: shutdown complete
2016/06/23 07:49:04 [DEBUG] http: Shutting down http server (127.0.0.1:18934)
--- PASS: TestEventList_EventBufOrder (2.45s)
=== RUN   TestUUIDToUint64
--- PASS: TestUUIDToUint64 (0.00s)
=== RUN   TestAppendSliceValue_implements
--- PASS: TestAppendSliceValue_implements (0.00s)
=== RUN   TestAppendSliceValueSet
--- PASS: TestAppendSliceValueSet (0.00s)
=== RUN   TestGatedWriter_impl
--- PASS: TestGatedWriter_impl (0.00s)
=== RUN   TestGatedWriter
--- PASS: TestGatedWriter (0.00s)
=== RUN   TestHealthChecksInState
2016/06/23 07:49:04 [INFO] raft: Node at 127.0.0.1:18135 [Follower] entering Follower state
2016/06/23 07:49:04 [INFO] serf: EventMemberJoin: Node 135 127.0.0.1
2016/06/23 07:49:04 [INFO] consul: adding LAN server Node 135 (Addr: 127.0.0.1:18135) (DC: dc1)
2016/06/23 07:49:04 [INFO] serf: EventMemberJoin: Node 135.dc1 127.0.0.1
2016/06/23 07:49:04 [INFO] consul: adding WAN server Node 135.dc1 (Addr: 127.0.0.1:18135) (DC: dc1)
2016/06/23 07:49:04 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:04 [INFO] raft: Node at 127.0.0.1:18135 [Candidate] entering Candidate state
2016/06/23 07:49:05 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:05 [DEBUG] raft: Vote granted from 127.0.0.1:18135. Tally: 1
2016/06/23 07:49:05 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:05 [INFO] raft: Node at 127.0.0.1:18135 [Leader] entering Leader state
2016/06/23 07:49:05 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:05 [INFO] consul: New leader elected: Node 135
2016/06/23 07:49:05 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:05 [DEBUG] raft: Node 127.0.0.1:18135 updated peer set (2): [127.0.0.1:18135]
2016/06/23 07:49:05 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:06 [DEBUG] memberlist: Potential blocking operation. Last command took 17.829545ms
2016/06/23 07:49:06 [INFO] consul: member 'Node 135' joined, marking health alive
2016/06/23 07:49:06 [INFO] agent: requesting shutdown
2016/06/23 07:49:06 [INFO] consul: shutting down server
2016/06/23 07:49:06 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:06 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:06 [INFO] agent: shutdown complete
2016/06/23 07:49:06 [DEBUG] http: Shutting down http server (127.0.0.1:18935)
2016/06/23 07:49:07 [INFO] raft: Node at 127.0.0.1:18136 [Follower] entering Follower state
2016/06/23 07:49:07 [INFO] serf: EventMemberJoin: Node 136 127.0.0.1
2016/06/23 07:49:07 [INFO] consul: adding LAN server Node 136 (Addr: 127.0.0.1:18136) (DC: dc1)
2016/06/23 07:49:07 [INFO] serf: EventMemberJoin: Node 136.dc1 127.0.0.1
2016/06/23 07:49:07 [INFO] consul: adding WAN server Node 136.dc1 (Addr: 127.0.0.1:18136) (DC: dc1)
2016/06/23 07:49:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:07 [INFO] raft: Node at 127.0.0.1:18136 [Candidate] entering Candidate state
2016/06/23 07:49:07 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18500
2016/06/23 07:49:07 [DEBUG] memberlist: TCP connection from=127.0.0.1:50258
2016/06/23 07:49:09 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:09 [DEBUG] raft: Vote granted from 127.0.0.1:18136. Tally: 1
2016/06/23 07:49:09 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:09 [INFO] raft: Node at 127.0.0.1:18136 [Leader] entering Leader state
2016/06/23 07:49:09 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:09 [INFO] consul: New leader elected: Node 136
2016/06/23 07:49:09 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:09 [DEBUG] raft: Node 127.0.0.1:18136 updated peer set (2): [127.0.0.1:18136]
2016/06/23 07:49:09 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:10 [INFO] consul: member 'Node 136' joined, marking health alive
2016/06/23 07:49:10 [DEBUG] memberlist: Potential blocking operation. Last command took 57.819103ms
2016/06/23 07:49:10 [INFO] agent: requesting shutdown
2016/06/23 07:49:10 [INFO] consul: shutting down server
2016/06/23 07:49:10 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:10 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:11 [INFO] agent: shutdown complete
2016/06/23 07:49:11 [DEBUG] http: Shutting down http server (127.0.0.1:18936)
--- PASS: TestHealthChecksInState (6.97s)
=== RUN   TestHealthChecksInState_DistanceSort
2016/06/23 07:49:11 [INFO] raft: Node at 127.0.0.1:18137 [Follower] entering Follower state
2016/06/23 07:49:11 [INFO] serf: EventMemberJoin: Node 137 127.0.0.1
2016/06/23 07:49:11 [INFO] consul: adding LAN server Node 137 (Addr: 127.0.0.1:18137) (DC: dc1)
2016/06/23 07:49:11 [INFO] serf: EventMemberJoin: Node 137.dc1 127.0.0.1
2016/06/23 07:49:11 [INFO] consul: adding WAN server Node 137.dc1 (Addr: 127.0.0.1:18137) (DC: dc1)
2016/06/23 07:49:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:11 [INFO] raft: Node at 127.0.0.1:18137 [Candidate] entering Candidate state
2016/06/23 07:49:12 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:12 [DEBUG] raft: Vote granted from 127.0.0.1:18137. Tally: 1
2016/06/23 07:49:12 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:12 [INFO] raft: Node at 127.0.0.1:18137 [Leader] entering Leader state
2016/06/23 07:49:12 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:12 [INFO] consul: New leader elected: Node 137
2016/06/23 07:49:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:12 [DEBUG] raft: Node 127.0.0.1:18137 updated peer set (2): [127.0.0.1:18137]
2016/06/23 07:49:12 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:13 [INFO] consul: member 'Node 137' joined, marking health alive
2016/06/23 07:49:14 [INFO] agent: requesting shutdown
2016/06/23 07:49:14 [INFO] consul: shutting down server
2016/06/23 07:49:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:14 [INFO] agent: shutdown complete
2016/06/23 07:49:14 [DEBUG] http: Shutting down http server (127.0.0.1:18937)
--- FAIL: TestHealthChecksInState_DistanceSort (3.50s)
	health_endpoint_test.go:143: bad: [0x11f9edc0 0x11e29810]
=== RUN   TestHealthNodeChecks
2016/06/23 07:49:15 [INFO] raft: Node at 127.0.0.1:18138 [Follower] entering Follower state
2016/06/23 07:49:15 [INFO] serf: EventMemberJoin: Node 138 127.0.0.1
2016/06/23 07:49:15 [INFO] consul: adding LAN server Node 138 (Addr: 127.0.0.1:18138) (DC: dc1)
2016/06/23 07:49:15 [INFO] serf: EventMemberJoin: Node 138.dc1 127.0.0.1
2016/06/23 07:49:15 [INFO] consul: adding WAN server Node 138.dc1 (Addr: 127.0.0.1:18138) (DC: dc1)
2016/06/23 07:49:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:15 [INFO] raft: Node at 127.0.0.1:18138 [Candidate] entering Candidate state
2016/06/23 07:49:15 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:15 [DEBUG] raft: Vote granted from 127.0.0.1:18138. Tally: 1
2016/06/23 07:49:15 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:15 [INFO] raft: Node at 127.0.0.1:18138 [Leader] entering Leader state
2016/06/23 07:49:15 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:15 [INFO] consul: New leader elected: Node 138
2016/06/23 07:49:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:16 [DEBUG] raft: Node 127.0.0.1:18138 updated peer set (2): [127.0.0.1:18138]
2016/06/23 07:49:16 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:16 [INFO] consul: member 'Node 138' joined, marking health alive
2016/06/23 07:49:16 [INFO] agent: requesting shutdown
2016/06/23 07:49:16 [INFO] consul: shutting down server
2016/06/23 07:49:16 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:17 [INFO] agent: shutdown complete
2016/06/23 07:49:17 [DEBUG] http: Shutting down http server (127.0.0.1:18938)
--- PASS: TestHealthNodeChecks (2.58s)
=== RUN   TestHealthServiceChecks
2016/06/23 07:49:18 [INFO] raft: Node at 127.0.0.1:18139 [Follower] entering Follower state
2016/06/23 07:49:18 [INFO] serf: EventMemberJoin: Node 139 127.0.0.1
2016/06/23 07:49:18 [INFO] consul: adding LAN server Node 139 (Addr: 127.0.0.1:18139) (DC: dc1)
2016/06/23 07:49:18 [INFO] serf: EventMemberJoin: Node 139.dc1 127.0.0.1
2016/06/23 07:49:18 [INFO] consul: adding WAN server Node 139.dc1 (Addr: 127.0.0.1:18139) (DC: dc1)
2016/06/23 07:49:18 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:18 [INFO] raft: Node at 127.0.0.1:18139 [Candidate] entering Candidate state
2016/06/23 07:49:18 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:18 [DEBUG] raft: Vote granted from 127.0.0.1:18139. Tally: 1
2016/06/23 07:49:18 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:18 [INFO] raft: Node at 127.0.0.1:18139 [Leader] entering Leader state
2016/06/23 07:49:18 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:18 [INFO] consul: New leader elected: Node 139
2016/06/23 07:49:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:19 [DEBUG] raft: Node 127.0.0.1:18139 updated peer set (2): [127.0.0.1:18139]
2016/06/23 07:49:19 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:20 [INFO] consul: member 'Node 139' joined, marking health alive
2016/06/23 07:49:20 [INFO] agent: requesting shutdown
2016/06/23 07:49:20 [INFO] consul: shutting down server
2016/06/23 07:49:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:20 [INFO] agent: shutdown complete
2016/06/23 07:49:20 [DEBUG] http: Shutting down http server (127.0.0.1:18939)
--- PASS: TestHealthServiceChecks (3.58s)
=== RUN   TestHealthServiceChecks_DistanceSort
2016/06/23 07:49:21 [INFO] raft: Node at 127.0.0.1:18140 [Follower] entering Follower state
2016/06/23 07:49:21 [INFO] serf: EventMemberJoin: Node 140 127.0.0.1
2016/06/23 07:49:21 [INFO] consul: adding LAN server Node 140 (Addr: 127.0.0.1:18140) (DC: dc1)
2016/06/23 07:49:21 [INFO] serf: EventMemberJoin: Node 140.dc1 127.0.0.1
2016/06/23 07:49:21 [INFO] consul: adding WAN server Node 140.dc1 (Addr: 127.0.0.1:18140) (DC: dc1)
2016/06/23 07:49:21 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:21 [INFO] raft: Node at 127.0.0.1:18140 [Candidate] entering Candidate state
2016/06/23 07:49:22 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:22 [DEBUG] raft: Vote granted from 127.0.0.1:18140. Tally: 1
2016/06/23 07:49:22 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:22 [INFO] raft: Node at 127.0.0.1:18140 [Leader] entering Leader state
2016/06/23 07:49:22 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:22 [INFO] consul: New leader elected: Node 140
2016/06/23 07:49:22 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:22 [DEBUG] raft: Node 127.0.0.1:18140 updated peer set (2): [127.0.0.1:18140]
2016/06/23 07:49:22 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:22 [INFO] consul: member 'Node 140' joined, marking health alive
2016/06/23 07:49:24 [INFO] agent: requesting shutdown
2016/06/23 07:49:24 [INFO] consul: shutting down server
2016/06/23 07:49:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:24 [INFO] agent: shutdown complete
2016/06/23 07:49:24 [DEBUG] http: Shutting down http server (127.0.0.1:18940)
--- FAIL: TestHealthServiceChecks_DistanceSort (3.89s)
	health_endpoint_test.go:337: bad: [0x124f20f0 0x124f2550]
=== RUN   TestHealthServiceNodes
2016/06/23 07:49:25 [INFO] raft: Node at 127.0.0.1:18141 [Follower] entering Follower state
2016/06/23 07:49:25 [INFO] serf: EventMemberJoin: Node 141 127.0.0.1
2016/06/23 07:49:25 [INFO] consul: adding LAN server Node 141 (Addr: 127.0.0.1:18141) (DC: dc1)
2016/06/23 07:49:25 [INFO] serf: EventMemberJoin: Node 141.dc1 127.0.0.1
2016/06/23 07:49:25 [INFO] consul: adding WAN server Node 141.dc1 (Addr: 127.0.0.1:18141) (DC: dc1)
2016/06/23 07:49:25 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:25 [INFO] raft: Node at 127.0.0.1:18141 [Candidate] entering Candidate state
2016/06/23 07:49:26 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:26 [DEBUG] raft: Vote granted from 127.0.0.1:18141. Tally: 1
2016/06/23 07:49:26 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:26 [INFO] raft: Node at 127.0.0.1:18141 [Leader] entering Leader state
2016/06/23 07:49:26 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:26 [INFO] consul: New leader elected: Node 141
2016/06/23 07:49:26 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:26 [DEBUG] raft: Node 127.0.0.1:18141 updated peer set (2): [127.0.0.1:18141]
2016/06/23 07:49:26 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:27 [INFO] consul: member 'Node 141' joined, marking health alive
2016/06/23 07:49:27 [DEBUG] memberlist: Potential blocking operation. Last command took 10.282982ms
2016/06/23 07:49:27 [INFO] agent: requesting shutdown
2016/06/23 07:49:27 [INFO] consul: shutting down server
2016/06/23 07:49:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:27 [INFO] agent: shutdown complete
2016/06/23 07:49:27 [DEBUG] http: Shutting down http server (127.0.0.1:18941)
--- PASS: TestHealthServiceNodes (3.28s)
=== RUN   TestHealthServiceNodes_DistanceSort
2016/06/23 07:49:28 [INFO] raft: Node at 127.0.0.1:18142 [Follower] entering Follower state
2016/06/23 07:49:28 [INFO] serf: EventMemberJoin: Node 142 127.0.0.1
2016/06/23 07:49:28 [INFO] consul: adding LAN server Node 142 (Addr: 127.0.0.1:18142) (DC: dc1)
2016/06/23 07:49:28 [INFO] serf: EventMemberJoin: Node 142.dc1 127.0.0.1
2016/06/23 07:49:28 [INFO] consul: adding WAN server Node 142.dc1 (Addr: 127.0.0.1:18142) (DC: dc1)
2016/06/23 07:49:28 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:28 [INFO] raft: Node at 127.0.0.1:18142 [Candidate] entering Candidate state
2016/06/23 07:49:28 [DEBUG] memberlist: Potential blocking operation. Last command took 42.321295ms
2016/06/23 07:49:29 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:29 [DEBUG] raft: Vote granted from 127.0.0.1:18142. Tally: 1
2016/06/23 07:49:29 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:29 [INFO] raft: Node at 127.0.0.1:18142 [Leader] entering Leader state
2016/06/23 07:49:29 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:29 [INFO] consul: New leader elected: Node 142
2016/06/23 07:49:29 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:29 [DEBUG] raft: Node 127.0.0.1:18142 updated peer set (2): [127.0.0.1:18142]
2016/06/23 07:49:29 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:30 [INFO] consul: member 'Node 142' joined, marking health alive
2016/06/23 07:49:31 [INFO] agent: requesting shutdown
2016/06/23 07:49:31 [INFO] consul: shutting down server
2016/06/23 07:49:31 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:31 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:31 [INFO] agent: shutdown complete
2016/06/23 07:49:31 [DEBUG] http: Shutting down http server (127.0.0.1:18942)
--- FAIL: TestHealthServiceNodes_DistanceSort (3.67s)
	health_endpoint_test.go:504: bad: [{0x10f334a0 0x121e3380 [0x11e29d10]} {0x113cc060 0x121e33c0 [0x11e28280]}]
=== RUN   TestHealthServiceNodes_PassingFilter
2016/06/23 07:49:32 [INFO] serf: EventMemberJoin: Node 143 127.0.0.1
2016/06/23 07:49:32 [INFO] serf: EventMemberJoin: Node 143.dc1 127.0.0.1
2016/06/23 07:49:32 [INFO] raft: Node at 127.0.0.1:18143 [Follower] entering Follower state
2016/06/23 07:49:32 [INFO] consul: adding LAN server Node 143 (Addr: 127.0.0.1:18143) (DC: dc1)
2016/06/23 07:49:32 [INFO] consul: adding WAN server Node 143.dc1 (Addr: 127.0.0.1:18143) (DC: dc1)
2016/06/23 07:49:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:32 [INFO] raft: Node at 127.0.0.1:18143 [Candidate] entering Candidate state
2016/06/23 07:49:32 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:32 [DEBUG] raft: Vote granted from 127.0.0.1:18143. Tally: 1
2016/06/23 07:49:32 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:32 [INFO] raft: Node at 127.0.0.1:18143 [Leader] entering Leader state
2016/06/23 07:49:32 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:32 [INFO] consul: New leader elected: Node 143
2016/06/23 07:49:33 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:33 [DEBUG] raft: Node 127.0.0.1:18143 updated peer set (2): [127.0.0.1:18143]
2016/06/23 07:49:33 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:33 [INFO] consul: member 'Node 143' joined, marking health alive
2016/06/23 07:49:34 [INFO] agent: requesting shutdown
2016/06/23 07:49:34 [INFO] consul: shutting down server
2016/06/23 07:49:34 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:34 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:34 [INFO] agent: shutdown complete
2016/06/23 07:49:34 [DEBUG] http: Shutting down http server (127.0.0.1:18943)
--- PASS: TestHealthServiceNodes_PassingFilter (3.05s)
=== RUN   TestFilterNonPassing
--- PASS: TestFilterNonPassing (0.00s)
=== RUN   TestHTTPServer_UnixSocket
2016/06/23 07:49:35 [INFO] raft: Node at 127.0.0.1:18144 [Follower] entering Follower state
2016/06/23 07:49:35 [INFO] serf: EventMemberJoin: Node 144 127.0.0.1
2016/06/23 07:49:35 [INFO] serf: EventMemberJoin: Node 144.dc1 127.0.0.1
2016/06/23 07:49:35 [INFO] consul: adding LAN server Node 144 (Addr: 127.0.0.1:18144) (DC: dc1)
2016/06/23 07:49:35 [INFO] consul: adding WAN server Node 144.dc1 (Addr: 127.0.0.1:18144) (DC: dc1)
2016/06/23 07:49:35 [DEBUG] http: Request GET /v1/agent/self (7.730237ms) from=@
2016/06/23 07:49:35 [INFO] agent: requesting shutdown
2016/06/23 07:49:35 [INFO] consul: shutting down server
2016/06/23 07:49:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:35 [INFO] raft: Node at 127.0.0.1:18144 [Candidate] entering Candidate state
2016/06/23 07:49:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:35 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:35 [INFO] agent: shutdown complete
2016/06/23 07:49:35 [DEBUG] http: Shutting down http server (/tmp/consul001077606/test.sock)
--- PASS: TestHTTPServer_UnixSocket (1.39s)
=== RUN   TestHTTPServer_UnixSocket_FileExists
2016/06/23 07:49:36 [INFO] serf: EventMemberJoin: Node 145 127.0.0.1
2016/06/23 07:49:36 [INFO] serf: EventMemberJoin: Node 145.dc1 127.0.0.1
2016/06/23 07:49:36 [WARN] agent: Replacing socket "/tmp/consul962340744/test.sock"
2016/06/23 07:49:36 [INFO] raft: Node at 127.0.0.1:18145 [Follower] entering Follower state
2016/06/23 07:49:36 [INFO] consul: adding LAN server Node 145 (Addr: 127.0.0.1:18145) (DC: dc1)
2016/06/23 07:49:36 [INFO] consul: adding WAN server Node 145.dc1 (Addr: 127.0.0.1:18145) (DC: dc1)
--- PASS: TestHTTPServer_UnixSocket_FileExists (0.59s)
=== RUN   TestSetIndex
--- PASS: TestSetIndex (0.00s)
=== RUN   TestSetKnownLeader
--- PASS: TestSetKnownLeader (0.00s)
=== RUN   TestSetLastContact
--- PASS: TestSetLastContact (0.00s)
=== RUN   TestSetMeta
--- PASS: TestSetMeta (0.00s)
=== RUN   TestHTTPAPIResponseHeaders
2016/06/23 07:49:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:36 [INFO] raft: Node at 127.0.0.1:18145 [Candidate] entering Candidate state
2016/06/23 07:49:36 [INFO] raft: Node at 127.0.0.1:18146 [Follower] entering Follower state
2016/06/23 07:49:36 [INFO] serf: EventMemberJoin: Node 146 127.0.0.1
2016/06/23 07:49:36 [INFO] consul: adding LAN server Node 146 (Addr: 127.0.0.1:18146) (DC: dc1)
2016/06/23 07:49:36 [INFO] serf: EventMemberJoin: Node 146.dc1 127.0.0.1
2016/06/23 07:49:36 [INFO] consul: adding WAN server Node 146.dc1 (Addr: 127.0.0.1:18146) (DC: dc1)
2016/06/23 07:49:36 [DEBUG] http: Request GET /v1/agent/self (11.667µs) from=
2016/06/23 07:49:36 [INFO] agent: requesting shutdown
2016/06/23 07:49:36 [INFO] consul: shutting down server
2016/06/23 07:49:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:37 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:37 [DEBUG] raft: Vote granted from 127.0.0.1:18145. Tally: 1
2016/06/23 07:49:37 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:37 [INFO] raft: Node at 127.0.0.1:18145 [Leader] entering Leader state
2016/06/23 07:49:37 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:37 [INFO] consul: New leader elected: Node 145
2016/06/23 07:49:37 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:37 [INFO] raft: Node at 127.0.0.1:18146 [Candidate] entering Candidate state
2016/06/23 07:49:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:37 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:37 [DEBUG] raft: Node 127.0.0.1:18145 updated peer set (2): [127.0.0.1:18145]
2016/06/23 07:49:37 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:38 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:38 [INFO] agent: shutdown complete
2016/06/23 07:49:38 [DEBUG] http: Shutting down http server (127.0.0.1:18946)
--- PASS: TestHTTPAPIResponseHeaders (1.61s)
=== RUN   TestContentTypeIsJSON
2016/06/23 07:49:38 [INFO] consul: member 'Node 145' joined, marking health alive
2016/06/23 07:49:38 [INFO] raft: Node at 127.0.0.1:18147 [Follower] entering Follower state
2016/06/23 07:49:38 [INFO] serf: EventMemberJoin: Node 147 127.0.0.1
2016/06/23 07:49:38 [INFO] consul: adding LAN server Node 147 (Addr: 127.0.0.1:18147) (DC: dc1)
2016/06/23 07:49:38 [INFO] serf: EventMemberJoin: Node 147.dc1 127.0.0.1
2016/06/23 07:49:38 [INFO] consul: adding WAN server Node 147.dc1 (Addr: 127.0.0.1:18147) (DC: dc1)
2016/06/23 07:49:38 [DEBUG] http: Request GET /v1/kv/key (383.345µs) from=
2016/06/23 07:49:38 [INFO] agent: requesting shutdown
2016/06/23 07:49:38 [INFO] consul: shutting down server
2016/06/23 07:49:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:38 [INFO] raft: Node at 127.0.0.1:18147 [Candidate] entering Candidate state
2016/06/23 07:49:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:39 [DEBUG] memberlist: Potential blocking operation. Last command took 21.294319ms
2016/06/23 07:49:40 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:40 [INFO] agent: shutdown complete
2016/06/23 07:49:40 [DEBUG] http: Shutting down http server (127.0.0.1:18947)
--- PASS: TestContentTypeIsJSON (2.39s)
=== RUN   TestHTTP_wrap_obfuscateLog
2016/06/23 07:49:41 [INFO] raft: Node at 127.0.0.1:18148 [Follower] entering Follower state
2016/06/23 07:49:41 [INFO] serf: EventMemberJoin: Node 148 127.0.0.1
2016/06/23 07:49:41 [INFO] consul: adding LAN server Node 148 (Addr: 127.0.0.1:18148) (DC: dc1)
2016/06/23 07:49:41 [INFO] serf: EventMemberJoin: Node 148.dc1 127.0.0.1
2016/06/23 07:49:41 [INFO] agent: requesting shutdown
2016/06/23 07:49:41 [INFO] consul: shutting down server
2016/06/23 07:49:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:41 [INFO] raft: Node at 127.0.0.1:18148 [Candidate] entering Candidate state
2016/06/23 07:49:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:41 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:41 [INFO] agent: shutdown complete
--- PASS: TestHTTP_wrap_obfuscateLog (1.46s)
=== RUN   TestPrettyPrint
2016/06/23 07:49:42 [INFO] raft: Node at 127.0.0.1:18149 [Follower] entering Follower state
2016/06/23 07:49:42 [INFO] serf: EventMemberJoin: Node 149 127.0.0.1
2016/06/23 07:49:42 [INFO] consul: adding LAN server Node 149 (Addr: 127.0.0.1:18149) (DC: dc1)
2016/06/23 07:49:42 [INFO] serf: EventMemberJoin: Node 149.dc1 127.0.0.1
2016/06/23 07:49:42 [INFO] consul: adding WAN server Node 149.dc1 (Addr: 127.0.0.1:18149) (DC: dc1)
2016/06/23 07:49:42 [DEBUG] http: Request GET /v1/kv/key?pretty=1 (3.970788ms) from=
2016/06/23 07:49:42 [INFO] agent: requesting shutdown
2016/06/23 07:49:42 [INFO] consul: shutting down server
2016/06/23 07:49:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:42 [INFO] raft: Node at 127.0.0.1:18149 [Candidate] entering Candidate state
2016/06/23 07:49:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:43 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:43 [INFO] agent: shutdown complete
2016/06/23 07:49:43 [DEBUG] http: Shutting down http server (127.0.0.1:18949)
--- PASS: TestPrettyPrint (1.30s)
=== RUN   TestPrettyPrintBare
2016/06/23 07:49:44 [INFO] raft: Node at 127.0.0.1:18150 [Follower] entering Follower state
2016/06/23 07:49:44 [INFO] serf: EventMemberJoin: Node 150 127.0.0.1
2016/06/23 07:49:44 [INFO] consul: adding LAN server Node 150 (Addr: 127.0.0.1:18150) (DC: dc1)
2016/06/23 07:49:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:44 [INFO] raft: Node at 127.0.0.1:18150 [Candidate] entering Candidate state
2016/06/23 07:49:44 [INFO] serf: EventMemberJoin: Node 150.dc1 127.0.0.1
2016/06/23 07:49:44 [DEBUG] http: Request GET /v1/kv/key?pretty (6.874877ms) from=
2016/06/23 07:49:44 [INFO] agent: requesting shutdown
2016/06/23 07:49:44 [INFO] consul: adding WAN server Node 150.dc1 (Addr: 127.0.0.1:18150) (DC: dc1)
2016/06/23 07:49:44 [INFO] consul: shutting down server
2016/06/23 07:49:44 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:44 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:44 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:44 [INFO] agent: shutdown complete
2016/06/23 07:49:44 [DEBUG] http: Shutting down http server (127.0.0.1:18950)
--- PASS: TestPrettyPrintBare (1.64s)
=== RUN   TestParseSource
2016/06/23 07:49:45 [INFO] raft: Node at 127.0.0.1:18151 [Follower] entering Follower state
2016/06/23 07:49:45 [INFO] serf: EventMemberJoin: Node 151 127.0.0.1
2016/06/23 07:49:45 [INFO] consul: adding LAN server Node 151 (Addr: 127.0.0.1:18151) (DC: dc1)
2016/06/23 07:49:45 [INFO] serf: EventMemberJoin: Node 151.dc1 127.0.0.1
2016/06/23 07:49:45 [INFO] consul: adding WAN server Node 151.dc1 (Addr: 127.0.0.1:18151) (DC: dc1)
2016/06/23 07:49:45 [INFO] agent: requesting shutdown
2016/06/23 07:49:45 [INFO] consul: shutting down server
2016/06/23 07:49:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:45 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:45 [INFO] raft: Node at 127.0.0.1:18151 [Candidate] entering Candidate state
2016/06/23 07:49:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:46 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:46 [INFO] agent: shutdown complete
2016/06/23 07:49:46 [DEBUG] http: Shutting down http server (127.0.0.1:18951)
--- PASS: TestParseSource (1.53s)
=== RUN   TestParseWait
--- PASS: TestParseWait (0.00s)
=== RUN   TestParseWait_InvalidTime
--- PASS: TestParseWait_InvalidTime (0.00s)
=== RUN   TestParseWait_InvalidIndex
--- PASS: TestParseWait_InvalidIndex (0.00s)
=== RUN   TestParseConsistency
--- PASS: TestParseConsistency (0.00s)
=== RUN   TestParseConsistency_Invalid
--- PASS: TestParseConsistency_Invalid (0.00s)
=== RUN   TestACLResolution
2016/06/23 07:49:47 [INFO] raft: Node at 127.0.0.1:18152 [Follower] entering Follower state
2016/06/23 07:49:47 [INFO] serf: EventMemberJoin: Node 152 127.0.0.1
2016/06/23 07:49:47 [INFO] serf: EventMemberJoin: Node 152.dc1 127.0.0.1
2016/06/23 07:49:47 [INFO] consul: adding LAN server Node 152 (Addr: 127.0.0.1:18152) (DC: dc1)
2016/06/23 07:49:47 [INFO] consul: adding WAN server Node 152.dc1 (Addr: 127.0.0.1:18152) (DC: dc1)
2016/06/23 07:49:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:47 [INFO] raft: Node at 127.0.0.1:18152 [Candidate] entering Candidate state
2016/06/23 07:49:47 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:47 [DEBUG] raft: Vote granted from 127.0.0.1:18152. Tally: 1
2016/06/23 07:49:47 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:47 [INFO] raft: Node at 127.0.0.1:18152 [Leader] entering Leader state
2016/06/23 07:49:47 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:47 [INFO] consul: New leader elected: Node 152
2016/06/23 07:49:48 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:48 [DEBUG] raft: Node 127.0.0.1:18152 updated peer set (2): [127.0.0.1:18152]
2016/06/23 07:49:48 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:48 [INFO] consul: member 'Node 152' joined, marking health alive
2016/06/23 07:49:48 [INFO] agent: requesting shutdown
2016/06/23 07:49:48 [INFO] consul: shutting down server
2016/06/23 07:49:48 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:49 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:49 [INFO] agent: shutdown complete
2016/06/23 07:49:49 [DEBUG] http: Shutting down http server (SCADA)
--- PASS: TestACLResolution (2.69s)
=== RUN   TestScadaHTTP
2016/06/23 07:49:49 [INFO] raft: Node at 127.0.0.1:18153 [Follower] entering Follower state
2016/06/23 07:49:49 [INFO] serf: EventMemberJoin: Node 153 127.0.0.1
2016/06/23 07:49:49 [INFO] consul: adding LAN server Node 153 (Addr: 127.0.0.1:18153) (DC: dc1)
2016/06/23 07:49:49 [INFO] serf: EventMemberJoin: Node 153.dc1 127.0.0.1
2016/06/23 07:49:49 [INFO] consul: adding WAN server Node 153.dc1 (Addr: 127.0.0.1:18153) (DC: dc1)
2016/06/23 07:49:49 [INFO] agent: requesting shutdown
2016/06/23 07:49:49 [INFO] consul: shutting down server
2016/06/23 07:49:49 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:49 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:49 [INFO] raft: Node at 127.0.0.1:18153 [Candidate] entering Candidate state
2016/06/23 07:49:49 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:50 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:50 [INFO] agent: shutdown complete
--- PASS: TestScadaHTTP (1.15s)
=== RUN   TestEnableWebUI
2016/06/23 07:49:51 [INFO] raft: Node at 127.0.0.1:18154 [Follower] entering Follower state
2016/06/23 07:49:51 [INFO] serf: EventMemberJoin: Node 154 127.0.0.1
2016/06/23 07:49:51 [INFO] serf: EventMemberJoin: Node 154.dc1 127.0.0.1
2016/06/23 07:49:51 [INFO] consul: adding LAN server Node 154 (Addr: 127.0.0.1:18154) (DC: dc1)
2016/06/23 07:49:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:51 [INFO] raft: Node at 127.0.0.1:18154 [Candidate] entering Candidate state
2016/06/23 07:49:51 [INFO] consul: adding WAN server Node 154.dc1 (Addr: 127.0.0.1:18154) (DC: dc1)
2016/06/23 07:49:51 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:51 [DEBUG] raft: Vote granted from 127.0.0.1:18154. Tally: 1
2016/06/23 07:49:51 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:51 [INFO] raft: Node at 127.0.0.1:18154 [Leader] entering Leader state
2016/06/23 07:49:51 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:51 [INFO] consul: New leader elected: Node 154
2016/06/23 07:49:51 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:51 [DEBUG] raft: Node 127.0.0.1:18154 updated peer set (2): [127.0.0.1:18154]
2016/06/23 07:49:52 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:52 [INFO] consul: member 'Node 154' joined, marking health alive
2016/06/23 07:49:53 [INFO] agent: requesting shutdown
2016/06/23 07:49:53 [INFO] consul: shutting down server
2016/06/23 07:49:53 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:53 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:53 [INFO] agent: shutdown complete
2016/06/23 07:49:53 [DEBUG] http: Shutting down http server (127.0.0.1:18954)
--- PASS: TestEnableWebUI (3.59s)
=== RUN   TestAgent_LoadKeyrings
2016/06/23 07:49:54 [INFO] raft: Node at 127.0.0.1:18155 [Follower] entering Follower state
2016/06/23 07:49:54 [INFO] serf: EventMemberJoin: Node 155 127.0.0.1
2016/06/23 07:49:54 [INFO] consul: adding LAN server Node 155 (Addr: 127.0.0.1:18155) (DC: dc1)
2016/06/23 07:49:54 [INFO] serf: EventMemberJoin: Node 155.dc1 127.0.0.1
2016/06/23 07:49:54 [INFO] consul: adding WAN server Node 155.dc1 (Addr: 127.0.0.1:18155) (DC: dc1)
2016/06/23 07:49:54 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:54 [INFO] raft: Node at 127.0.0.1:18155 [Candidate] entering Candidate state
2016/06/23 07:49:55 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18501
2016/06/23 07:49:55 [DEBUG] memberlist: TCP connection from=127.0.0.1:38560
2016/06/23 07:49:55 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:55 [DEBUG] raft: Vote granted from 127.0.0.1:18155. Tally: 1
2016/06/23 07:49:55 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:55 [INFO] raft: Node at 127.0.0.1:18155 [Leader] entering Leader state
2016/06/23 07:49:55 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:55 [INFO] consul: New leader elected: Node 155
2016/06/23 07:49:55 [INFO] serf: EventMemberJoin: Node 156 127.0.0.1
2016/06/23 07:49:55 [INFO] consul: adding LAN server Node 156 (Addr: 127.0.0.1:18156) (DC: dc1)
2016/06/23 07:49:55 [INFO] raft: Node at 127.0.0.1:18156 [Follower] entering Follower state
2016/06/23 07:49:55 [INFO] serf: EventMemberJoin: Node 156.dc1 127.0.0.1
2016/06/23 07:49:55 [INFO] consul: adding WAN server Node 156.dc1 (Addr: 127.0.0.1:18156) (DC: dc1)
2016/06/23 07:49:55 [INFO] serf: EventMemberJoin: Node 157 127.0.0.1
2016/06/23 07:49:55 [INFO] agent: requesting shutdown
2016/06/23 07:49:55 [INFO] consul: shutting down client
2016/06/23 07:49:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:55 [INFO] raft: Node at 127.0.0.1:18156 [Candidate] entering Candidate state
2016/06/23 07:49:55 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:55 [INFO] agent: shutdown complete
2016/06/23 07:49:55 [INFO] agent: requesting shutdown
2016/06/23 07:49:55 [INFO] consul: shutting down server
2016/06/23 07:49:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:56 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:56 [DEBUG] raft: Node 127.0.0.1:18155 updated peer set (2): [127.0.0.1:18155]
2016/06/23 07:49:56 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:56 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:56 [INFO] agent: shutdown complete
2016/06/23 07:49:56 [INFO] agent: requesting shutdown
2016/06/23 07:49:56 [INFO] consul: shutting down server
2016/06/23 07:49:56 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:56 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:56 [ERR] consul: ACL initialization failed: failed to create master token: leadership lost while committing log
2016/06/23 07:49:56 [ERR] consul: failed to establish leadership: failed to create master token: leadership lost while committing log
2016/06/23 07:49:56 [INFO] agent: shutdown complete
--- PASS: TestAgent_LoadKeyrings (2.89s)
=== RUN   TestAgent_InitKeyring
--- PASS: TestAgent_InitKeyring (0.00s)
=== RUN   TestAgentKeyring_ACL
2016/06/23 07:49:57 [INFO] raft: Node at 127.0.0.1:18158 [Follower] entering Follower state
2016/06/23 07:49:57 [INFO] serf: EventMemberJoin: Node 158 127.0.0.1
2016/06/23 07:49:57 [INFO] serf: EventMemberJoin: Node 158.dc1 127.0.0.1
2016/06/23 07:49:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:49:57 [INFO] consul: adding WAN server Node 158.dc1 (Addr: 127.0.0.1:18158) (DC: dc1)
2016/06/23 07:49:57 [INFO] raft: Node at 127.0.0.1:18158 [Candidate] entering Candidate state
2016/06/23 07:49:57 [INFO] consul: adding LAN server Node 158 (Addr: 127.0.0.1:18158) (DC: dc1)
2016/06/23 07:49:58 [DEBUG] raft: Votes needed: 1
2016/06/23 07:49:58 [DEBUG] raft: Vote granted from 127.0.0.1:18158. Tally: 1
2016/06/23 07:49:58 [INFO] raft: Election won. Tally: 1
2016/06/23 07:49:58 [INFO] raft: Node at 127.0.0.1:18158 [Leader] entering Leader state
2016/06/23 07:49:58 [INFO] consul: cluster leadership acquired
2016/06/23 07:49:58 [INFO] consul: New leader elected: Node 158
2016/06/23 07:49:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:49:58 [DEBUG] raft: Node 127.0.0.1:18158 updated peer set (2): [127.0.0.1:18158]
2016/06/23 07:49:59 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:49:59 [INFO] consul: member 'Node 158' joined, marking health alive
2016/06/23 07:49:59 [INFO] serf: Received list-keys query
2016/06/23 07:49:59 [DEBUG] serf: messageQueryResponseType: Node 158.dc1
2016/06/23 07:49:59 [INFO] serf: Received list-keys query
2016/06/23 07:49:59 [DEBUG] serf: messageQueryResponseType: Node 158
2016/06/23 07:49:59 [INFO] serf: Received install-key query
2016/06/23 07:49:59 [DEBUG] serf: messageQueryResponseType: Node 158.dc1
2016/06/23 07:49:59 [INFO] serf: Received install-key query
2016/06/23 07:49:59 [DEBUG] serf: messageQueryResponseType: Node 158
2016/06/23 07:49:59 [INFO] serf: Received use-key query
2016/06/23 07:49:59 [DEBUG] serf: messageQueryResponseType: Node 158.dc1
2016/06/23 07:49:59 [INFO] serf: Received use-key query
2016/06/23 07:49:59 [DEBUG] serf: messageQueryResponseType: Node 158
2016/06/23 07:49:59 [INFO] serf: Received remove-key query
2016/06/23 07:49:59 [DEBUG] serf: messageQueryResponseType: Node 158.dc1
2016/06/23 07:49:59 [INFO] serf: Received remove-key query
2016/06/23 07:49:59 [DEBUG] serf: messageQueryResponseType: Node 158
2016/06/23 07:49:59 [INFO] agent: requesting shutdown
2016/06/23 07:49:59 [INFO] consul: shutting down server
2016/06/23 07:49:59 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:59 [WARN] serf: Shutdown without a Leave
2016/06/23 07:49:59 [INFO] agent: shutdown complete
--- PASS: TestAgentKeyring_ACL (3.08s)
=== RUN   TestKVSEndpoint_PUT_GET_DELETE
2016/06/23 07:50:00 [INFO] raft: Node at 127.0.0.1:18159 [Follower] entering Follower state
2016/06/23 07:50:00 [INFO] serf: EventMemberJoin: Node 159 127.0.0.1
2016/06/23 07:50:00 [INFO] consul: adding LAN server Node 159 (Addr: 127.0.0.1:18159) (DC: dc1)
2016/06/23 07:50:00 [INFO] serf: EventMemberJoin: Node 159.dc1 127.0.0.1
2016/06/23 07:50:00 [INFO] consul: adding WAN server Node 159.dc1 (Addr: 127.0.0.1:18159) (DC: dc1)
2016/06/23 07:50:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:00 [INFO] raft: Node at 127.0.0.1:18159 [Candidate] entering Candidate state
2016/06/23 07:50:00 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:00 [DEBUG] raft: Vote granted from 127.0.0.1:18159. Tally: 1
2016/06/23 07:50:00 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:00 [INFO] raft: Node at 127.0.0.1:18159 [Leader] entering Leader state
2016/06/23 07:50:00 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:00 [INFO] consul: New leader elected: Node 159
2016/06/23 07:50:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:01 [DEBUG] raft: Node 127.0.0.1:18159 updated peer set (2): [127.0.0.1:18159]
2016/06/23 07:50:01 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:01 [INFO] consul: member 'Node 159' joined, marking health alive
2016/06/23 07:50:02 [DEBUG] memberlist: Potential blocking operation. Last command took 14.750118ms
2016/06/23 07:50:04 [INFO] agent: requesting shutdown
2016/06/23 07:50:04 [INFO] consul: shutting down server
2016/06/23 07:50:04 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:04 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:04 [INFO] agent: shutdown complete
2016/06/23 07:50:04 [DEBUG] http: Shutting down http server (127.0.0.1:18959)
--- PASS: TestKVSEndpoint_PUT_GET_DELETE (4.56s)
=== RUN   TestKVSEndpoint_Recurse
2016/06/23 07:50:05 [INFO] raft: Node at 127.0.0.1:18160 [Follower] entering Follower state
2016/06/23 07:50:05 [INFO] serf: EventMemberJoin: Node 160 127.0.0.1
2016/06/23 07:50:05 [INFO] consul: adding LAN server Node 160 (Addr: 127.0.0.1:18160) (DC: dc1)
2016/06/23 07:50:05 [INFO] serf: EventMemberJoin: Node 160.dc1 127.0.0.1
2016/06/23 07:50:05 [INFO] consul: adding WAN server Node 160.dc1 (Addr: 127.0.0.1:18160) (DC: dc1)
2016/06/23 07:50:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:05 [INFO] raft: Node at 127.0.0.1:18160 [Candidate] entering Candidate state
2016/06/23 07:50:05 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:05 [DEBUG] raft: Vote granted from 127.0.0.1:18160. Tally: 1
2016/06/23 07:50:05 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:05 [INFO] raft: Node at 127.0.0.1:18160 [Leader] entering Leader state
2016/06/23 07:50:05 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:05 [INFO] consul: New leader elected: Node 160
2016/06/23 07:50:05 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:06 [DEBUG] raft: Node 127.0.0.1:18160 updated peer set (2): [127.0.0.1:18160]
2016/06/23 07:50:06 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:06 [INFO] consul: member 'Node 160' joined, marking health alive
2016/06/23 07:50:07 [DEBUG] memberlist: Potential blocking operation. Last command took 45.469059ms
2016/06/23 07:50:07 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18500
2016/06/23 07:50:07 [DEBUG] memberlist: TCP connection from=127.0.0.1:51078
2016/06/23 07:50:08 [INFO] agent: requesting shutdown
2016/06/23 07:50:08 [INFO] consul: shutting down server
2016/06/23 07:50:08 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:08 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:09 [INFO] agent: shutdown complete
2016/06/23 07:50:09 [DEBUG] http: Shutting down http server (127.0.0.1:18960)
--- PASS: TestKVSEndpoint_Recurse (4.61s)
=== RUN   TestKVSEndpoint_DELETE_CAS
2016/06/23 07:50:09 [INFO] raft: Node at 127.0.0.1:18161 [Follower] entering Follower state
2016/06/23 07:50:09 [INFO] serf: EventMemberJoin: Node 161 127.0.0.1
2016/06/23 07:50:09 [INFO] consul: adding LAN server Node 161 (Addr: 127.0.0.1:18161) (DC: dc1)
2016/06/23 07:50:09 [INFO] serf: EventMemberJoin: Node 161.dc1 127.0.0.1
2016/06/23 07:50:09 [INFO] consul: adding WAN server Node 161.dc1 (Addr: 127.0.0.1:18161) (DC: dc1)
2016/06/23 07:50:09 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:09 [INFO] raft: Node at 127.0.0.1:18161 [Candidate] entering Candidate state
2016/06/23 07:50:10 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:10 [DEBUG] raft: Vote granted from 127.0.0.1:18161. Tally: 1
2016/06/23 07:50:10 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:10 [INFO] raft: Node at 127.0.0.1:18161 [Leader] entering Leader state
2016/06/23 07:50:10 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:10 [INFO] consul: New leader elected: Node 161
2016/06/23 07:50:10 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:10 [DEBUG] raft: Node 127.0.0.1:18161 updated peer set (2): [127.0.0.1:18161]
2016/06/23 07:50:11 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:11 [DEBUG] memberlist: Potential blocking operation. Last command took 74.859625ms
2016/06/23 07:50:11 [INFO] consul: member 'Node 161' joined, marking health alive
2016/06/23 07:50:12 [INFO] agent: requesting shutdown
2016/06/23 07:50:12 [INFO] consul: shutting down server
2016/06/23 07:50:12 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:12 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:12 [INFO] agent: shutdown complete
2016/06/23 07:50:12 [DEBUG] http: Shutting down http server (127.0.0.1:18961)
--- PASS: TestKVSEndpoint_DELETE_CAS (3.70s)
=== RUN   TestKVSEndpoint_CAS
2016/06/23 07:50:13 [INFO] raft: Node at 127.0.0.1:18162 [Follower] entering Follower state
2016/06/23 07:50:13 [INFO] serf: EventMemberJoin: Node 162 127.0.0.1
2016/06/23 07:50:13 [INFO] consul: adding LAN server Node 162 (Addr: 127.0.0.1:18162) (DC: dc1)
2016/06/23 07:50:13 [INFO] serf: EventMemberJoin: Node 162.dc1 127.0.0.1
2016/06/23 07:50:13 [INFO] consul: adding WAN server Node 162.dc1 (Addr: 127.0.0.1:18162) (DC: dc1)
2016/06/23 07:50:13 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:13 [INFO] raft: Node at 127.0.0.1:18162 [Candidate] entering Candidate state
2016/06/23 07:50:14 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:14 [DEBUG] raft: Vote granted from 127.0.0.1:18162. Tally: 1
2016/06/23 07:50:14 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:14 [INFO] raft: Node at 127.0.0.1:18162 [Leader] entering Leader state
2016/06/23 07:50:14 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:14 [INFO] consul: New leader elected: Node 162
2016/06/23 07:50:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:14 [DEBUG] raft: Node 127.0.0.1:18162 updated peer set (2): [127.0.0.1:18162]
2016/06/23 07:50:14 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:14 [INFO] consul: member 'Node 162' joined, marking health alive
2016/06/23 07:50:16 [DEBUG] memberlist: Potential blocking operation. Last command took 36.72179ms
2016/06/23 07:50:16 [INFO] agent: requesting shutdown
2016/06/23 07:50:16 [INFO] consul: shutting down server
2016/06/23 07:50:16 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:16 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:16 [INFO] agent: shutdown complete
2016/06/23 07:50:16 [DEBUG] http: Shutting down http server (127.0.0.1:18962)
--- PASS: TestKVSEndpoint_CAS (3.53s)
=== RUN   TestKVSEndpoint_ListKeys
2016/06/23 07:50:16 [INFO] raft: Node at 127.0.0.1:18163 [Follower] entering Follower state
2016/06/23 07:50:16 [INFO] serf: EventMemberJoin: Node 163 127.0.0.1
2016/06/23 07:50:16 [INFO] consul: adding LAN server Node 163 (Addr: 127.0.0.1:18163) (DC: dc1)
2016/06/23 07:50:16 [INFO] serf: EventMemberJoin: Node 163.dc1 127.0.0.1
2016/06/23 07:50:16 [INFO] consul: adding WAN server Node 163.dc1 (Addr: 127.0.0.1:18163) (DC: dc1)
2016/06/23 07:50:17 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:17 [INFO] raft: Node at 127.0.0.1:18163 [Candidate] entering Candidate state
2016/06/23 07:50:17 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:17 [DEBUG] raft: Vote granted from 127.0.0.1:18163. Tally: 1
2016/06/23 07:50:17 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:17 [INFO] raft: Node at 127.0.0.1:18163 [Leader] entering Leader state
2016/06/23 07:50:17 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:17 [INFO] consul: New leader elected: Node 163
2016/06/23 07:50:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:18 [DEBUG] raft: Node 127.0.0.1:18163 updated peer set (2): [127.0.0.1:18163]
2016/06/23 07:50:18 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:18 [DEBUG] memberlist: Potential blocking operation. Last command took 11.390349ms
2016/06/23 07:50:18 [INFO] consul: member 'Node 163' joined, marking health alive
2016/06/23 07:50:20 [INFO] agent: requesting shutdown
2016/06/23 07:50:20 [INFO] consul: shutting down server
2016/06/23 07:50:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:20 [INFO] agent: shutdown complete
2016/06/23 07:50:20 [DEBUG] http: Shutting down http server (127.0.0.1:18963)
--- PASS: TestKVSEndpoint_ListKeys (4.06s)
=== RUN   TestKVSEndpoint_AcquireRelease
2016/06/23 07:50:21 [INFO] serf: EventMemberJoin: Node 164 127.0.0.1
2016/06/23 07:50:21 [INFO] raft: Node at 127.0.0.1:18164 [Follower] entering Follower state
2016/06/23 07:50:21 [INFO] consul: adding LAN server Node 164 (Addr: 127.0.0.1:18164) (DC: dc1)
2016/06/23 07:50:21 [INFO] serf: EventMemberJoin: Node 164.dc1 127.0.0.1
2016/06/23 07:50:21 [INFO] consul: adding WAN server Node 164.dc1 (Addr: 127.0.0.1:18164) (DC: dc1)
2016/06/23 07:50:21 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:21 [INFO] raft: Node at 127.0.0.1:18164 [Candidate] entering Candidate state
2016/06/23 07:50:21 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:21 [DEBUG] raft: Vote granted from 127.0.0.1:18164. Tally: 1
2016/06/23 07:50:21 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:21 [INFO] raft: Node at 127.0.0.1:18164 [Leader] entering Leader state
2016/06/23 07:50:21 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:21 [INFO] consul: New leader elected: Node 164
2016/06/23 07:50:22 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:22 [DEBUG] raft: Node 127.0.0.1:18164 updated peer set (2): [127.0.0.1:18164]
2016/06/23 07:50:22 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:22 [INFO] consul: member 'Node 164' joined, marking health alive
2016/06/23 07:50:23 [DEBUG] memberlist: Potential blocking operation. Last command took 29.037222ms
2016/06/23 07:50:23 [INFO] agent: requesting shutdown
2016/06/23 07:50:23 [INFO] consul: shutting down server
2016/06/23 07:50:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:24 [INFO] agent: shutdown complete
2016/06/23 07:50:24 [DEBUG] http: Shutting down http server (127.0.0.1:18964)
--- PASS: TestKVSEndpoint_AcquireRelease (3.87s)
=== RUN   TestKVSEndpoint_GET_Raw
2016/06/23 07:50:24 [INFO] raft: Node at 127.0.0.1:18165 [Follower] entering Follower state
2016/06/23 07:50:24 [INFO] serf: EventMemberJoin: Node 165 127.0.0.1
2016/06/23 07:50:24 [INFO] consul: adding LAN server Node 165 (Addr: 127.0.0.1:18165) (DC: dc1)
2016/06/23 07:50:24 [INFO] serf: EventMemberJoin: Node 165.dc1 127.0.0.1
2016/06/23 07:50:24 [INFO] consul: adding WAN server Node 165.dc1 (Addr: 127.0.0.1:18165) (DC: dc1)
2016/06/23 07:50:24 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:24 [INFO] raft: Node at 127.0.0.1:18165 [Candidate] entering Candidate state
2016/06/23 07:50:25 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:25 [DEBUG] raft: Vote granted from 127.0.0.1:18165. Tally: 1
2016/06/23 07:50:25 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:25 [INFO] raft: Node at 127.0.0.1:18165 [Leader] entering Leader state
2016/06/23 07:50:25 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:25 [INFO] consul: New leader elected: Node 165
2016/06/23 07:50:26 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:26 [DEBUG] raft: Node 127.0.0.1:18165 updated peer set (2): [127.0.0.1:18165]
2016/06/23 07:50:26 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:26 [INFO] consul: member 'Node 165' joined, marking health alive
2016/06/23 07:50:27 [INFO] agent: requesting shutdown
2016/06/23 07:50:27 [INFO] consul: shutting down server
2016/06/23 07:50:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:27 [INFO] agent: shutdown complete
2016/06/23 07:50:27 [DEBUG] http: Shutting down http server (127.0.0.1:18965)
--- PASS: TestKVSEndpoint_GET_Raw (3.29s)
=== RUN   TestKVSEndpoint_PUT_ConflictingFlags
2016/06/23 07:50:27 [DEBUG] memberlist: Potential blocking operation. Last command took 31.742639ms
2016/06/23 07:50:28 [INFO] raft: Node at 127.0.0.1:18166 [Follower] entering Follower state
2016/06/23 07:50:28 [INFO] serf: EventMemberJoin: Node 166 127.0.0.1
2016/06/23 07:50:28 [INFO] consul: adding LAN server Node 166 (Addr: 127.0.0.1:18166) (DC: dc1)
2016/06/23 07:50:28 [INFO] serf: EventMemberJoin: Node 166.dc1 127.0.0.1
2016/06/23 07:50:28 [INFO] consul: adding WAN server Node 166.dc1 (Addr: 127.0.0.1:18166) (DC: dc1)
2016/06/23 07:50:28 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:28 [INFO] raft: Node at 127.0.0.1:18166 [Candidate] entering Candidate state
2016/06/23 07:50:28 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:28 [DEBUG] raft: Vote granted from 127.0.0.1:18166. Tally: 1
2016/06/23 07:50:28 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:28 [INFO] raft: Node at 127.0.0.1:18166 [Leader] entering Leader state
2016/06/23 07:50:28 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:28 [INFO] consul: New leader elected: Node 166
2016/06/23 07:50:29 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:29 [DEBUG] raft: Node 127.0.0.1:18166 updated peer set (2): [127.0.0.1:18166]
2016/06/23 07:50:29 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:29 [INFO] consul: member 'Node 166' joined, marking health alive
2016/06/23 07:50:29 [INFO] agent: requesting shutdown
2016/06/23 07:50:29 [INFO] consul: shutting down server
2016/06/23 07:50:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:30 [DEBUG] memberlist: Potential blocking operation. Last command took 73.902595ms
2016/06/23 07:50:30 [INFO] agent: shutdown complete
2016/06/23 07:50:30 [DEBUG] http: Shutting down http server (127.0.0.1:18966)
--- PASS: TestKVSEndpoint_PUT_ConflictingFlags (2.69s)
=== RUN   TestKVSEndpoint_DELETE_ConflictingFlags
2016/06/23 07:50:30 [INFO] raft: Node at 127.0.0.1:18167 [Follower] entering Follower state
2016/06/23 07:50:30 [INFO] serf: EventMemberJoin: Node 167 127.0.0.1
2016/06/23 07:50:30 [INFO] consul: adding LAN server Node 167 (Addr: 127.0.0.1:18167) (DC: dc1)
2016/06/23 07:50:30 [INFO] serf: EventMemberJoin: Node 167.dc1 127.0.0.1
2016/06/23 07:50:30 [INFO] consul: adding WAN server Node 167.dc1 (Addr: 127.0.0.1:18167) (DC: dc1)
2016/06/23 07:50:30 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:30 [INFO] raft: Node at 127.0.0.1:18167 [Candidate] entering Candidate state
2016/06/23 07:50:31 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:31 [DEBUG] raft: Vote granted from 127.0.0.1:18167. Tally: 1
2016/06/23 07:50:31 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:31 [INFO] raft: Node at 127.0.0.1:18167 [Leader] entering Leader state
2016/06/23 07:50:31 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:31 [INFO] consul: New leader elected: Node 167
2016/06/23 07:50:31 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:31 [DEBUG] raft: Node 127.0.0.1:18167 updated peer set (2): [127.0.0.1:18167]
2016/06/23 07:50:31 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:31 [INFO] consul: member 'Node 167' joined, marking health alive
2016/06/23 07:50:31 [INFO] agent: requesting shutdown
2016/06/23 07:50:31 [INFO] consul: shutting down server
2016/06/23 07:50:31 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:32 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:32 [INFO] agent: shutdown complete
2016/06/23 07:50:32 [DEBUG] http: Shutting down http server (127.0.0.1:18967)
--- PASS: TestKVSEndpoint_DELETE_ConflictingFlags (1.98s)
=== RUN   TestAgentAntiEntropy_Services
2016/06/23 07:50:32 [ERR] scada-client: failed to handshake: invalid token
2016/06/23 07:50:32 [INFO] raft: Node at 127.0.0.1:18168 [Follower] entering Follower state
2016/06/23 07:50:32 [INFO] serf: EventMemberJoin: Node 168 127.0.0.1
2016/06/23 07:50:32 [INFO] consul: adding LAN server Node 168 (Addr: 127.0.0.1:18168) (DC: dc1)
2016/06/23 07:50:32 [INFO] serf: EventMemberJoin: Node 168.dc1 127.0.0.1
2016/06/23 07:50:32 [INFO] consul: adding WAN server Node 168.dc1 (Addr: 127.0.0.1:18168) (DC: dc1)
2016/06/23 07:50:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:32 [INFO] raft: Node at 127.0.0.1:18168 [Candidate] entering Candidate state
2016/06/23 07:50:33 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:33 [DEBUG] raft: Vote granted from 127.0.0.1:18168. Tally: 1
2016/06/23 07:50:33 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:33 [INFO] raft: Node at 127.0.0.1:18168 [Leader] entering Leader state
2016/06/23 07:50:33 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:33 [INFO] consul: New leader elected: Node 168
2016/06/23 07:50:33 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:33 [DEBUG] raft: Node 127.0.0.1:18168 updated peer set (2): [127.0.0.1:18168]
2016/06/23 07:50:33 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:34 [INFO] consul: member 'Node 168' joined, marking health alive
2016/06/23 07:50:36 [INFO] agent: requesting shutdown
2016/06/23 07:50:36 [INFO] consul: shutting down server
2016/06/23 07:50:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:36 [INFO] agent: Synced service 'api'
2016/06/23 07:50:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:37 [DEBUG] memberlist: Potential blocking operation. Last command took 51.883921ms
2016/06/23 07:50:37 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:50:37 [ERR] agent: failed to sync changes: leadership lost while committing log
2016/06/23 07:50:37 [INFO] agent: shutdown complete
--- FAIL: TestAgentAntiEntropy_Services (4.87s)
	local_test.go:126: bad: map[]
=== RUN   TestAgentAntiEntropy_EnableTagOverride
2016/06/23 07:50:37 [INFO] raft: Node at 127.0.0.1:18169 [Follower] entering Follower state
2016/06/23 07:50:37 [INFO] serf: EventMemberJoin: Node 169 127.0.0.1
2016/06/23 07:50:37 [INFO] consul: adding LAN server Node 169 (Addr: 127.0.0.1:18169) (DC: dc1)
2016/06/23 07:50:37 [INFO] serf: EventMemberJoin: Node 169.dc1 127.0.0.1
2016/06/23 07:50:37 [INFO] consul: adding WAN server Node 169.dc1 (Addr: 127.0.0.1:18169) (DC: dc1)
2016/06/23 07:50:37 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:37 [INFO] raft: Node at 127.0.0.1:18169 [Candidate] entering Candidate state
2016/06/23 07:50:38 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:38 [DEBUG] raft: Vote granted from 127.0.0.1:18169. Tally: 1
2016/06/23 07:50:38 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:38 [INFO] raft: Node at 127.0.0.1:18169 [Leader] entering Leader state
2016/06/23 07:50:38 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:38 [INFO] consul: New leader elected: Node 169
2016/06/23 07:50:38 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:38 [DEBUG] raft: Node 127.0.0.1:18169 updated peer set (2): [127.0.0.1:18169]
2016/06/23 07:50:38 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:39 [INFO] consul: member 'Node 169' joined, marking health alive
2016/06/23 07:50:40 [INFO] agent: requesting shutdown
2016/06/23 07:50:40 [INFO] consul: shutting down server
2016/06/23 07:50:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:40 [INFO] agent: Synced service 'consul'
2016/06/23 07:50:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:40 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:50:40 [ERR] agent: failed to sync changes: leadership lost while committing log
2016/06/23 07:50:40 [INFO] agent: shutdown complete
--- FAIL: TestAgentAntiEntropy_EnableTagOverride (3.52s)
	local_test.go:253: bad: &{svc_id1 svc1 [tag1_mod]  7100 true {0 0}} &{svc_id1 svc1 [tag1_mod]  6100 true {0 0}}
=== RUN   TestAgentAntiEntropy_Services_WithChecks
2016/06/23 07:50:41 [INFO] raft: Node at 127.0.0.1:18170 [Follower] entering Follower state
2016/06/23 07:50:41 [INFO] serf: EventMemberJoin: Node 170 127.0.0.1
2016/06/23 07:50:41 [INFO] consul: adding LAN server Node 170 (Addr: 127.0.0.1:18170) (DC: dc1)
2016/06/23 07:50:41 [INFO] serf: EventMemberJoin: Node 170.dc1 127.0.0.1
2016/06/23 07:50:41 [INFO] consul: adding WAN server Node 170.dc1 (Addr: 127.0.0.1:18170) (DC: dc1)
2016/06/23 07:50:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:41 [INFO] raft: Node at 127.0.0.1:18170 [Candidate] entering Candidate state
2016/06/23 07:50:42 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:42 [DEBUG] raft: Vote granted from 127.0.0.1:18170. Tally: 1
2016/06/23 07:50:42 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:42 [INFO] raft: Node at 127.0.0.1:18170 [Leader] entering Leader state
2016/06/23 07:50:42 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:42 [INFO] consul: New leader elected: Node 170
2016/06/23 07:50:42 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:42 [DEBUG] raft: Node 127.0.0.1:18170 updated peer set (2): [127.0.0.1:18170]
2016/06/23 07:50:42 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:43 [INFO] consul: member 'Node 170' joined, marking health alive
2016/06/23 07:50:43 [INFO] agent: Synced service 'mysql'
2016/06/23 07:50:43 [DEBUG] memberlist: Potential blocking operation. Last command took 68.367426ms
2016/06/23 07:50:44 [INFO] agent: Synced service 'redis'
2016/06/23 07:50:44 [INFO] agent: requesting shutdown
2016/06/23 07:50:44 [INFO] consul: shutting down server
2016/06/23 07:50:44 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:44 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:44 [INFO] agent: shutdown complete
--- PASS: TestAgentAntiEntropy_Services_WithChecks (3.72s)
=== RUN   TestAgentAntiEntropy_Services_ACLDeny
2016/06/23 07:50:44 [INFO] raft: Node at 127.0.0.1:18171 [Follower] entering Follower state
2016/06/23 07:50:44 [INFO] serf: EventMemberJoin: Node 171 127.0.0.1
2016/06/23 07:50:44 [INFO] consul: adding LAN server Node 171 (Addr: 127.0.0.1:18171) (DC: dc1)
2016/06/23 07:50:44 [INFO] serf: EventMemberJoin: Node 171.dc1 127.0.0.1
2016/06/23 07:50:44 [INFO] consul: adding WAN server Node 171.dc1 (Addr: 127.0.0.1:18171) (DC: dc1)
2016/06/23 07:50:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:44 [INFO] raft: Node at 127.0.0.1:18171 [Candidate] entering Candidate state
2016/06/23 07:50:45 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:45 [DEBUG] raft: Vote granted from 127.0.0.1:18171. Tally: 1
2016/06/23 07:50:45 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:45 [INFO] raft: Node at 127.0.0.1:18171 [Leader] entering Leader state
2016/06/23 07:50:45 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:45 [INFO] consul: New leader elected: Node 171
2016/06/23 07:50:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:45 [DEBUG] raft: Node 127.0.0.1:18171 updated peer set (2): [127.0.0.1:18171]
2016/06/23 07:50:45 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:46 [INFO] consul: member 'Node 171' joined, marking health alive
2016/06/23 07:50:46 [DEBUG] memberlist: Potential blocking operation. Last command took 47.732461ms
2016/06/23 07:50:46 [INFO] agent: requesting shutdown
2016/06/23 07:50:46 [INFO] consul: shutting down server
2016/06/23 07:50:46 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:46 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:46 [INFO] agent: Synced service 'consul'
2016/06/23 07:50:46 [WARN] consul.catalog: Register of service 'mysql' on 'Node 171' denied due to ACLs
2016/06/23 07:50:46 [WARN] agent: Service 'mysql' registration blocked by ACLs
2016/06/23 07:50:47 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:50:47 [ERR] agent: failed to sync changes: leadership lost while committing log
2016/06/23 07:50:47 [INFO] agent: shutdown complete
--- FAIL: TestAgentAntiEntropy_Services_ACLDeny (2.87s)
	local_test.go:467: bad: map[consul:0x122c0c40]
=== RUN   TestAgentAntiEntropy_Checks
2016/06/23 07:50:47 [INFO] raft: Node at 127.0.0.1:18172 [Follower] entering Follower state
2016/06/23 07:50:47 [INFO] serf: EventMemberJoin: Node 172 127.0.0.1
2016/06/23 07:50:47 [INFO] consul: adding LAN server Node 172 (Addr: 127.0.0.1:18172) (DC: dc1)
2016/06/23 07:50:47 [INFO] serf: EventMemberJoin: Node 172.dc1 127.0.0.1
2016/06/23 07:50:47 [INFO] consul: adding WAN server Node 172.dc1 (Addr: 127.0.0.1:18172) (DC: dc1)
2016/06/23 07:50:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:47 [INFO] raft: Node at 127.0.0.1:18172 [Candidate] entering Candidate state
2016/06/23 07:50:48 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:48 [DEBUG] raft: Vote granted from 127.0.0.1:18172. Tally: 1
2016/06/23 07:50:48 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:48 [INFO] raft: Node at 127.0.0.1:18172 [Leader] entering Leader state
2016/06/23 07:50:48 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:48 [INFO] consul: New leader elected: Node 172
2016/06/23 07:50:48 [DEBUG] memberlist: Potential blocking operation. Last command took 66.429033ms
2016/06/23 07:50:48 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:48 [DEBUG] raft: Node 127.0.0.1:18172 updated peer set (2): [127.0.0.1:18172]
2016/06/23 07:50:49 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:49 [INFO] consul: member 'Node 172' joined, marking health alive
2016/06/23 07:50:50 [INFO] agent: Synced service 'consul'
2016/06/23 07:50:50 [INFO] agent: requesting shutdown
2016/06/23 07:50:50 [INFO] consul: shutting down server
2016/06/23 07:50:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:50 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:50:50 [ERR] agent: failed to sync changes: leadership lost while committing log
2016/06/23 07:50:50 [INFO] agent: shutdown complete
--- FAIL: TestAgentAntiEntropy_Checks (3.75s)
	local_test.go:594: bad: {[0x126ce410 0x126ce0f0 0x11615810 0x1215e230] {8 0 true}}
=== RUN   TestAgentAntiEntropy_Check_DeferSync
2016/06/23 07:50:51 [INFO] raft: Node at 127.0.0.1:18173 [Follower] entering Follower state
2016/06/23 07:50:51 [INFO] serf: EventMemberJoin: Node 173 127.0.0.1
2016/06/23 07:50:51 [INFO] consul: adding LAN server Node 173 (Addr: 127.0.0.1:18173) (DC: dc1)
2016/06/23 07:50:51 [INFO] serf: EventMemberJoin: Node 173.dc1 127.0.0.1
2016/06/23 07:50:51 [INFO] consul: adding WAN server Node 173.dc1 (Addr: 127.0.0.1:18173) (DC: dc1)
2016/06/23 07:50:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:51 [INFO] raft: Node at 127.0.0.1:18173 [Candidate] entering Candidate state
2016/06/23 07:50:52 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:52 [DEBUG] raft: Vote granted from 127.0.0.1:18173. Tally: 1
2016/06/23 07:50:52 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:52 [INFO] raft: Node at 127.0.0.1:18173 [Leader] entering Leader state
2016/06/23 07:50:52 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:52 [INFO] consul: New leader elected: Node 173
2016/06/23 07:50:52 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:52 [DEBUG] raft: Node 127.0.0.1:18173 updated peer set (2): [127.0.0.1:18173]
2016/06/23 07:50:52 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:53 [INFO] consul: member 'Node 173' joined, marking health alive
2016/06/23 07:50:53 [INFO] agent: requesting shutdown
2016/06/23 07:50:53 [INFO] consul: shutting down server
2016/06/23 07:50:53 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:53 [INFO] agent: Synced service 'consul'
2016/06/23 07:50:53 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:53 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:50:53 [ERR] agent: failed to sync changes: leadership lost while committing log
2016/06/23 07:50:53 [INFO] agent: shutdown complete
--- FAIL: TestAgentAntiEntropy_Check_DeferSync (3.07s)
	local_test.go:690: checks: &{Node 173 web web passing     {0 0}}
=== RUN   TestAgentAntiEntropy_NodeInfo
2016/06/23 07:50:54 [INFO] raft: Node at 127.0.0.1:18174 [Follower] entering Follower state
2016/06/23 07:50:54 [INFO] serf: EventMemberJoin: Node 174 127.0.0.1
2016/06/23 07:50:54 [INFO] consul: adding LAN server Node 174 (Addr: 127.0.0.1:18174) (DC: dc1)
2016/06/23 07:50:54 [INFO] serf: EventMemberJoin: Node 174.dc1 127.0.0.1
2016/06/23 07:50:54 [INFO] consul: adding WAN server Node 174.dc1 (Addr: 127.0.0.1:18174) (DC: dc1)
2016/06/23 07:50:54 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:54 [INFO] raft: Node at 127.0.0.1:18174 [Candidate] entering Candidate state
2016/06/23 07:50:55 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:55 [DEBUG] raft: Vote granted from 127.0.0.1:18174. Tally: 1
2016/06/23 07:50:55 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:55 [INFO] raft: Node at 127.0.0.1:18174 [Leader] entering Leader state
2016/06/23 07:50:55 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:55 [INFO] consul: New leader elected: Node 174
2016/06/23 07:50:55 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:55 [DEBUG] raft: Node 127.0.0.1:18174 updated peer set (2): [127.0.0.1:18174]
2016/06/23 07:50:55 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:55 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18501
2016/06/23 07:50:55 [DEBUG] memberlist: TCP connection from=127.0.0.1:38808
2016/06/23 07:50:55 [INFO] consul: member 'Node 174' joined, marking health alive
2016/06/23 07:50:56 [INFO] agent: requesting shutdown
2016/06/23 07:50:56 [INFO] consul: shutting down server
2016/06/23 07:50:56 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:56 [INFO] agent: Synced service 'consul'
2016/06/23 07:50:56 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:56 [INFO] agent: shutdown complete
--- FAIL: TestAgentAntiEntropy_NodeInfo (2.59s)
	local_test.go:771: bad: map[]
=== RUN   TestAgentAntiEntropy_deleteService_fails
--- PASS: TestAgentAntiEntropy_deleteService_fails (0.00s)
=== RUN   TestAgentAntiEntropy_deleteCheck_fails
--- PASS: TestAgentAntiEntropy_deleteCheck_fails (0.00s)
=== RUN   TestAgent_serviceTokens
--- PASS: TestAgent_serviceTokens (0.00s)
=== RUN   TestAgent_checkTokens
--- PASS: TestAgent_checkTokens (0.00s)
=== RUN   TestAgent_nestedPauseResume
--- PASS: TestAgent_nestedPauseResume (0.00s)
=== RUN   TestAgent_sendCoordinate
2016/06/23 07:50:57 [INFO] raft: Node at 127.0.0.1:18177 [Follower] entering Follower state
2016/06/23 07:50:57 [INFO] serf: EventMemberJoin: Node 177 127.0.0.1
2016/06/23 07:50:57 [INFO] serf: EventMemberJoin: Node 177.dc1 127.0.0.1
2016/06/23 07:50:57 [INFO] consul: adding LAN server Node 177 (Addr: 127.0.0.1:18177) (DC: dc1)
2016/06/23 07:50:57 [INFO] consul: adding WAN server Node 177.dc1 (Addr: 127.0.0.1:18177) (DC: dc1)
2016/06/23 07:50:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:57 [INFO] raft: Node at 127.0.0.1:18177 [Candidate] entering Candidate state
2016/06/23 07:50:57 [ERR] agent: coordinate update error: No cluster leader
2016/06/23 07:50:57 [ERR] agent: coordinate update error: No cluster leader
2016/06/23 07:50:57 [ERR] agent: coordinate update error: No cluster leader
2016/06/23 07:50:57 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:57 [DEBUG] raft: Vote granted from 127.0.0.1:18177. Tally: 1
2016/06/23 07:50:57 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:57 [INFO] raft: Node at 127.0.0.1:18177 [Leader] entering Leader state
2016/06/23 07:50:57 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:57 [INFO] consul: New leader elected: Node 177
2016/06/23 07:50:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:57 [DEBUG] raft: Node 127.0.0.1:18177 updated peer set (2): [127.0.0.1:18177]
2016/06/23 07:50:58 [DEBUG] consul: reset tombstone GC to index 3
2016/06/23 07:50:58 [INFO] consul: member 'Node 177' joined, marking health alive
2016/06/23 07:50:59 [INFO] agent: requesting shutdown
2016/06/23 07:50:59 [INFO] consul: shutting down server
2016/06/23 07:50:59 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:59 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:59 [INFO] agent: shutdown complete
--- PASS: TestAgent_sendCoordinate (3.02s)
=== RUN   TestLogWriter
--- PASS: TestLogWriter (0.00s)
=== RUN   TestPreparedQuery_Create
2016/06/23 07:51:00 [INFO] raft: Node at 127.0.0.1:18178 [Follower] entering Follower state
2016/06/23 07:51:00 [INFO] serf: EventMemberJoin: Node 178 127.0.0.1
2016/06/23 07:51:00 [INFO] consul: adding LAN server Node 178 (Addr: 127.0.0.1:18178) (DC: dc1)
2016/06/23 07:51:00 [INFO] serf: EventMemberJoin: Node 178.dc1 127.0.0.1
2016/06/23 07:51:00 [INFO] consul: adding WAN server Node 178.dc1 (Addr: 127.0.0.1:18178) (DC: dc1)
2016/06/23 07:51:00 [DEBUG] memberlist: Potential blocking operation. Last command took 21.738666ms
2016/06/23 07:51:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:00 [INFO] raft: Node at 127.0.0.1:18178 [Candidate] entering Candidate state
2016/06/23 07:51:01 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:01 [DEBUG] raft: Vote granted from 127.0.0.1:18178. Tally: 1
2016/06/23 07:51:01 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:01 [INFO] raft: Node at 127.0.0.1:18178 [Leader] entering Leader state
2016/06/23 07:51:01 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:01 [INFO] consul: New leader elected: Node 178
2016/06/23 07:51:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:01 [DEBUG] raft: Node 127.0.0.1:18178 updated peer set (2): [127.0.0.1:18178]
2016/06/23 07:51:01 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:02 [INFO] consul: member 'Node 178' joined, marking health alive
2016/06/23 07:51:02 [WARN] consul: endpoint injected; this should only be used for testing
2016/06/23 07:51:02 [WARN] agent: endpoint injected; this should only be used for testing
2016/06/23 07:51:02 [INFO] agent: requesting shutdown
2016/06/23 07:51:02 [INFO] consul: shutting down server
2016/06/23 07:51:02 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:02 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:02 [INFO] agent: shutdown complete
2016/06/23 07:51:02 [DEBUG] http: Shutting down http server (127.0.0.1:18978)
--- PASS: TestPreparedQuery_Create (3.39s)
=== RUN   TestPreparedQuery_List
2016/06/23 07:51:03 [INFO] raft: Node at 127.0.0.1:18179 [Follower] entering Follower state
2016/06/23 07:51:03 [INFO] serf: EventMemberJoin: Node 179 127.0.0.1
2016/06/23 07:51:03 [INFO] serf: EventMemberJoin: Node 179.dc1 127.0.0.1
2016/06/23 07:51:03 [INFO] consul: adding WAN server Node 179.dc1 (Addr: 127.0.0.1:18179) (DC: dc1)
2016/06/23 07:51:03 [INFO] consul: adding LAN server Node 179 (Addr: 127.0.0.1:18179) (DC: dc1)
2016/06/23 07:51:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:03 [INFO] raft: Node at 127.0.0.1:18179 [Candidate] entering Candidate state
2016/06/23 07:51:04 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:04 [DEBUG] raft: Vote granted from 127.0.0.1:18179. Tally: 1
2016/06/23 07:51:04 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:04 [INFO] raft: Node at 127.0.0.1:18179 [Leader] entering Leader state
2016/06/23 07:51:04 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:04 [INFO] consul: New leader elected: Node 179
2016/06/23 07:51:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:04 [DEBUG] raft: Node 127.0.0.1:18179 updated peer set (2): [127.0.0.1:18179]
2016/06/23 07:51:04 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:04 [INFO] consul: member 'Node 179' joined, marking health alive
2016/06/23 07:51:05 [WARN] consul: endpoint injected; this should only be used for testing
2016/06/23 07:51:05 [WARN] agent: endpoint injected; this should only be used for testing
2016/06/23 07:51:05 [INFO] agent: requesting shutdown
2016/06/23 07:51:05 [INFO] consul: shutting down server
2016/06/23 07:51:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:05 [INFO] agent: shutdown complete
2016/06/23 07:51:05 [DEBUG] http: Shutting down http server (127.0.0.1:18979)
2016/06/23 07:51:06 [INFO] raft: Node at 127.0.0.1:18180 [Follower] entering Follower state
2016/06/23 07:51:06 [INFO] serf: EventMemberJoin: Node 180 127.0.0.1
2016/06/23 07:51:06 [INFO] consul: adding LAN server Node 180 (Addr: 127.0.0.1:18180) (DC: dc1)
2016/06/23 07:51:06 [INFO] serf: EventMemberJoin: Node 180.dc1 127.0.0.1
2016/06/23 07:51:06 [INFO] consul: adding WAN server Node 180.dc1 (Addr: 127.0.0.1:18180) (DC: dc1)
2016/06/23 07:51:06 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:06 [INFO] raft: Node at 127.0.0.1:18180 [Candidate] entering Candidate state
2016/06/23 07:51:06 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:06 [DEBUG] raft: Vote granted from 127.0.0.1:18180. Tally: 1
2016/06/23 07:51:06 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:06 [INFO] raft: Node at 127.0.0.1:18180 [Leader] entering Leader state
2016/06/23 07:51:06 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:06 [INFO] consul: New leader elected: Node 180
2016/06/23 07:51:07 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:07 [DEBUG] raft: Node 127.0.0.1:18180 updated peer set (2): [127.0.0.1:18180]
2016/06/23 07:51:07 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:08 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18500
2016/06/23 07:51:08 [DEBUG] memberlist: TCP connection from=127.0.0.1:51168
2016/06/23 07:51:08 [INFO] consul: member 'Node 180' joined, marking health alive
2016/06/23 07:51:08 [WARN] consul: endpoint injected; this should only be used for testing
2016/06/23 07:51:08 [WARN] agent: endpoint injected; this should only be used for testing
2016/06/23 07:51:08 [INFO] agent: requesting shutdown
2016/06/23 07:51:08 [INFO] consul: shutting down server
2016/06/23 07:51:08 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:08 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:08 [INFO] agent: shutdown complete
2016/06/23 07:51:08 [DEBUG] http: Shutting down http server (127.0.0.1:18980)
--- PASS: TestPreparedQuery_List (5.58s)
=== RUN   TestPreparedQuery_Execute
2016/06/23 07:51:09 [INFO] serf: EventMemberJoin: Node 181 127.0.0.1
2016/06/23 07:51:09 [INFO] raft: Node at 127.0.0.1:18181 [Follower] entering Follower state
2016/06/23 07:51:09 [INFO] consul: adding LAN server Node 181 (Addr: 127.0.0.1:18181) (DC: dc1)
2016/06/23 07:51:09 [INFO] serf: EventMemberJoin: Node 181.dc1 127.0.0.1
2016/06/23 07:51:09 [INFO] consul: adding WAN server Node 181.dc1 (Addr: 127.0.0.1:18181) (DC: dc1)
2016/06/23 07:51:09 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:09 [INFO] raft: Node at 127.0.0.1:18181 [Candidate] entering Candidate state
2016/06/23 07:51:09 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:09 [DEBUG] raft: Vote granted from 127.0.0.1:18181. Tally: 1
2016/06/23 07:51:09 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:09 [INFO] raft: Node at 127.0.0.1:18181 [Leader] entering Leader state
2016/06/23 07:51:09 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:09 [INFO] consul: New leader elected: Node 181
2016/06/23 07:51:10 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:10 [DEBUG] raft: Node 127.0.0.1:18181 updated peer set (2): [127.0.0.1:18181]
2016/06/23 07:51:10 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:10 [INFO] consul: member 'Node 181' joined, marking health alive
2016/06/23 07:51:11 [WARN] consul: endpoint injected; this should only be used for testing
2016/06/23 07:51:11 [WARN] agent: endpoint injected; this should only be used for testing
2016/06/23 07:51:11 [INFO] agent: requesting shutdown
2016/06/23 07:51:11 [INFO] consul: shutting down server
2016/06/23 07:51:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:11 [INFO] agent: shutdown complete
2016/06/23 07:51:11 [DEBUG] http: Shutting down http server (127.0.0.1:18981)
2016/06/23 07:51:12 [INFO] raft: Node at 127.0.0.1:18182 [Follower] entering Follower state
2016/06/23 07:51:12 [INFO] serf: EventMemberJoin: Node 182 127.0.0.1
2016/06/23 07:51:12 [INFO] consul: adding LAN server Node 182 (Addr: 127.0.0.1:18182) (DC: dc1)
2016/06/23 07:51:12 [INFO] serf: EventMemberJoin: Node 182.dc1 127.0.0.1
2016/06/23 07:51:12 [INFO] consul: adding WAN server Node 182.dc1 (Addr: 127.0.0.1:18182) (DC: dc1)
2016/06/23 07:51:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:12 [INFO] raft: Node at 127.0.0.1:18182 [Candidate] entering Candidate state
2016/06/23 07:51:13 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:13 [DEBUG] raft: Vote granted from 127.0.0.1:18182. Tally: 1
2016/06/23 07:51:13 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:13 [INFO] raft: Node at 127.0.0.1:18182 [Leader] entering Leader state
2016/06/23 07:51:13 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:13 [INFO] consul: New leader elected: Node 182
2016/06/23 07:51:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:13 [DEBUG] raft: Node 127.0.0.1:18182 updated peer set (2): [127.0.0.1:18182]
2016/06/23 07:51:13 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:14 [INFO] consul: member 'Node 182' joined, marking health alive
2016/06/23 07:51:14 [WARN] consul: endpoint injected; this should only be used for testing
2016/06/23 07:51:14 [WARN] agent: endpoint injected; this should only be used for testing
2016/06/23 07:51:14 [INFO] agent: requesting shutdown
2016/06/23 07:51:14 [INFO] consul: shutting down server
2016/06/23 07:51:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:14 [INFO] agent: shutdown complete
2016/06/23 07:51:14 [DEBUG] http: Shutting down http server (127.0.0.1:18982)
2016/06/23 07:51:15 [INFO] raft: Node at 127.0.0.1:18183 [Follower] entering Follower state
2016/06/23 07:51:15 [INFO] serf: EventMemberJoin: Node 183 127.0.0.1
2016/06/23 07:51:15 [INFO] consul: adding LAN server Node 183 (Addr: 127.0.0.1:18183) (DC: dc1)
2016/06/23 07:51:15 [INFO] serf: EventMemberJoin: Node 183.dc1 127.0.0.1
2016/06/23 07:51:15 [INFO] consul: adding WAN server Node 183.dc1 (Addr: 127.0.0.1:18183) (DC: dc1)
2016/06/23 07:51:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:15 [INFO] raft: Node at 127.0.0.1:18183 [Candidate] entering Candidate state
2016/06/23 07:51:16 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:16 [DEBUG] raft: Vote granted from 127.0.0.1:18183. Tally: 1
2016/06/23 07:51:16 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:16 [INFO] raft: Node at 127.0.0.1:18183 [Leader] entering Leader state
2016/06/23 07:51:16 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:16 [INFO] consul: New leader elected: Node 183
2016/06/23 07:51:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:16 [DEBUG] raft: Node 127.0.0.1:18183 updated peer set (2): [127.0.0.1:18183]
2016/06/23 07:51:16 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:17 [INFO] consul: member 'Node 183' joined, marking health alive
2016/06/23 07:51:17 [INFO] agent: requesting shutdown
2016/06/23 07:51:17 [INFO] consul: shutting down server
2016/06/23 07:51:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:17 [INFO] agent: shutdown complete
2016/06/23 07:51:17 [DEBUG] http: Shutting down http server (127.0.0.1:18983)
--- PASS: TestPreparedQuery_Execute (9.01s)
=== RUN   TestPreparedQuery_Explain
2016/06/23 07:51:17 [INFO] raft: Node at 127.0.0.1:18184 [Follower] entering Follower state
2016/06/23 07:51:17 [INFO] serf: EventMemberJoin: Node 184 127.0.0.1
2016/06/23 07:51:17 [INFO] consul: adding LAN server Node 184 (Addr: 127.0.0.1:18184) (DC: dc1)
2016/06/23 07:51:17 [INFO] serf: EventMemberJoin: Node 184.dc1 127.0.0.1
2016/06/23 07:51:17 [INFO] consul: adding WAN server Node 184.dc1 (Addr: 127.0.0.1:18184) (DC: dc1)
2016/06/23 07:51:18 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:18 [INFO] raft: Node at 127.0.0.1:18184 [Candidate] entering Candidate state
2016/06/23 07:51:18 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:18 [DEBUG] raft: Vote granted from 127.0.0.1:18184. Tally: 1
2016/06/23 07:51:18 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:18 [INFO] raft: Node at 127.0.0.1:18184 [Leader] entering Leader state
2016/06/23 07:51:18 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:18 [INFO] consul: New leader elected: Node 184
2016/06/23 07:51:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:18 [DEBUG] raft: Node 127.0.0.1:18184 updated peer set (2): [127.0.0.1:18184]
2016/06/23 07:51:18 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:19 [INFO] consul: member 'Node 184' joined, marking health alive
2016/06/23 07:51:19 [WARN] consul: endpoint injected; this should only be used for testing
2016/06/23 07:51:19 [WARN] agent: endpoint injected; this should only be used for testing
2016/06/23 07:51:19 [INFO] agent: requesting shutdown
2016/06/23 07:51:19 [INFO] consul: shutting down server
2016/06/23 07:51:19 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:19 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:19 [INFO] agent: shutdown complete
2016/06/23 07:51:19 [DEBUG] http: Shutting down http server (127.0.0.1:18984)
2016/06/23 07:51:20 [INFO] serf: EventMemberJoin: Node 185 127.0.0.1
2016/06/23 07:51:20 [INFO] serf: EventMemberJoin: Node 185.dc1 127.0.0.1
2016/06/23 07:51:20 [INFO] raft: Node at 127.0.0.1:18185 [Follower] entering Follower state
2016/06/23 07:51:20 [INFO] consul: adding LAN server Node 185 (Addr: 127.0.0.1:18185) (DC: dc1)
2016/06/23 07:51:20 [INFO] consul: adding WAN server Node 185.dc1 (Addr: 127.0.0.1:18185) (DC: dc1)
2016/06/23 07:51:20 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:20 [INFO] raft: Node at 127.0.0.1:18185 [Candidate] entering Candidate state
2016/06/23 07:51:21 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:21 [DEBUG] raft: Vote granted from 127.0.0.1:18185. Tally: 1
2016/06/23 07:51:21 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:21 [INFO] raft: Node at 127.0.0.1:18185 [Leader] entering Leader state
2016/06/23 07:51:21 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:21 [INFO] consul: New leader elected: Node 185
2016/06/23 07:51:21 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:21 [DEBUG] raft: Node 127.0.0.1:18185 updated peer set (2): [127.0.0.1:18185]
2016/06/23 07:51:21 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:21 [INFO] consul: member 'Node 185' joined, marking health alive
2016/06/23 07:51:21 [INFO] agent: requesting shutdown
2016/06/23 07:51:21 [INFO] consul: shutting down server
2016/06/23 07:51:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:22 [INFO] agent: shutdown complete
2016/06/23 07:51:22 [DEBUG] http: Shutting down http server (127.0.0.1:18985)
--- PASS: TestPreparedQuery_Explain (4.54s)
=== RUN   TestPreparedQuery_Get
2016/06/23 07:51:22 [INFO] raft: Node at 127.0.0.1:18186 [Follower] entering Follower state
2016/06/23 07:51:22 [INFO] serf: EventMemberJoin: Node 186 127.0.0.1
2016/06/23 07:51:22 [INFO] serf: EventMemberJoin: Node 186.dc1 127.0.0.1
2016/06/23 07:51:22 [INFO] consul: adding LAN server Node 186 (Addr: 127.0.0.1:18186) (DC: dc1)
2016/06/23 07:51:22 [INFO] consul: adding WAN server Node 186.dc1 (Addr: 127.0.0.1:18186) (DC: dc1)
2016/06/23 07:51:22 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:22 [INFO] raft: Node at 127.0.0.1:18186 [Candidate] entering Candidate state
2016/06/23 07:51:23 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:23 [DEBUG] raft: Vote granted from 127.0.0.1:18186. Tally: 1
2016/06/23 07:51:23 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:23 [INFO] raft: Node at 127.0.0.1:18186 [Leader] entering Leader state
2016/06/23 07:51:23 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:23 [INFO] consul: New leader elected: Node 186
2016/06/23 07:51:23 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:23 [DEBUG] raft: Node 127.0.0.1:18186 updated peer set (2): [127.0.0.1:18186]
2016/06/23 07:51:23 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:24 [INFO] consul: member 'Node 186' joined, marking health alive
2016/06/23 07:51:24 [WARN] consul: endpoint injected; this should only be used for testing
2016/06/23 07:51:24 [WARN] agent: endpoint injected; this should only be used for testing
2016/06/23 07:51:24 [INFO] agent: requesting shutdown
2016/06/23 07:51:24 [INFO] consul: shutting down server
2016/06/23 07:51:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:24 [INFO] agent: shutdown complete
2016/06/23 07:51:24 [DEBUG] http: Shutting down http server (127.0.0.1:18986)
2016/06/23 07:51:25 [INFO] serf: EventMemberJoin: Node 187 127.0.0.1
2016/06/23 07:51:25 [INFO] raft: Node at 127.0.0.1:18187 [Follower] entering Follower state
2016/06/23 07:51:25 [INFO] consul: adding LAN server Node 187 (Addr: 127.0.0.1:18187) (DC: dc1)
2016/06/23 07:51:25 [INFO] serf: EventMemberJoin: Node 187.dc1 127.0.0.1
2016/06/23 07:51:25 [INFO] consul: adding WAN server Node 187.dc1 (Addr: 127.0.0.1:18187) (DC: dc1)
2016/06/23 07:51:25 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:25 [INFO] raft: Node at 127.0.0.1:18187 [Candidate] entering Candidate state
2016/06/23 07:51:25 [DEBUG] memberlist: Potential blocking operation. Last command took 60.77086ms
2016/06/23 07:51:25 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:25 [DEBUG] raft: Vote granted from 127.0.0.1:18187. Tally: 1
2016/06/23 07:51:25 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:25 [INFO] raft: Node at 127.0.0.1:18187 [Leader] entering Leader state
2016/06/23 07:51:25 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:25 [INFO] consul: New leader elected: Node 187
2016/06/23 07:51:25 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:25 [DEBUG] raft: Node 127.0.0.1:18187 updated peer set (2): [127.0.0.1:18187]
2016/06/23 07:51:26 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:26 [INFO] consul: member 'Node 187' joined, marking health alive
2016/06/23 07:51:27 [INFO] agent: requesting shutdown
2016/06/23 07:51:27 [INFO] consul: shutting down server
2016/06/23 07:51:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:27 [INFO] agent: shutdown complete
2016/06/23 07:51:27 [DEBUG] http: Shutting down http server (127.0.0.1:18987)
--- PASS: TestPreparedQuery_Get (5.22s)
=== RUN   TestPreparedQuery_Update
2016/06/23 07:51:28 [INFO] serf: EventMemberJoin: Node 188 127.0.0.1
2016/06/23 07:51:28 [INFO] serf: EventMemberJoin: Node 188.dc1 127.0.0.1
2016/06/23 07:51:28 [INFO] raft: Node at 127.0.0.1:18188 [Follower] entering Follower state
2016/06/23 07:51:28 [INFO] consul: adding LAN server Node 188 (Addr: 127.0.0.1:18188) (DC: dc1)
2016/06/23 07:51:28 [INFO] consul: adding WAN server Node 188.dc1 (Addr: 127.0.0.1:18188) (DC: dc1)
2016/06/23 07:51:28 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:28 [INFO] raft: Node at 127.0.0.1:18188 [Candidate] entering Candidate state
2016/06/23 07:51:28 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:28 [DEBUG] raft: Vote granted from 127.0.0.1:18188. Tally: 1
2016/06/23 07:51:28 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:28 [INFO] raft: Node at 127.0.0.1:18188 [Leader] entering Leader state
2016/06/23 07:51:28 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:28 [INFO] consul: New leader elected: Node 188
2016/06/23 07:51:28 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:28 [DEBUG] raft: Node 127.0.0.1:18188 updated peer set (2): [127.0.0.1:18188]
2016/06/23 07:51:29 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:30 [INFO] consul: member 'Node 188' joined, marking health alive
2016/06/23 07:51:30 [WARN] consul: endpoint injected; this should only be used for testing
2016/06/23 07:51:30 [WARN] agent: endpoint injected; this should only be used for testing
2016/06/23 07:51:30 [INFO] agent: requesting shutdown
2016/06/23 07:51:30 [INFO] consul: shutting down server
2016/06/23 07:51:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:30 [INFO] agent: shutdown complete
2016/06/23 07:51:30 [DEBUG] http: Shutting down http server (127.0.0.1:18988)
--- PASS: TestPreparedQuery_Update (3.38s)
=== RUN   TestPreparedQuery_Delete
2016/06/23 07:51:31 [INFO] raft: Node at 127.0.0.1:18189 [Follower] entering Follower state
2016/06/23 07:51:31 [INFO] serf: EventMemberJoin: Node 189 127.0.0.1
2016/06/23 07:51:31 [INFO] consul: adding LAN server Node 189 (Addr: 127.0.0.1:18189) (DC: dc1)
2016/06/23 07:51:31 [INFO] serf: EventMemberJoin: Node 189.dc1 127.0.0.1
2016/06/23 07:51:31 [INFO] consul: adding WAN server Node 189.dc1 (Addr: 127.0.0.1:18189) (DC: dc1)
2016/06/23 07:51:31 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:31 [INFO] raft: Node at 127.0.0.1:18189 [Candidate] entering Candidate state
2016/06/23 07:51:32 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:32 [DEBUG] raft: Vote granted from 127.0.0.1:18189. Tally: 1
2016/06/23 07:51:32 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:32 [INFO] raft: Node at 127.0.0.1:18189 [Leader] entering Leader state
2016/06/23 07:51:32 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:32 [INFO] consul: New leader elected: Node 189
2016/06/23 07:51:32 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:32 [DEBUG] raft: Node 127.0.0.1:18189 updated peer set (2): [127.0.0.1:18189]
2016/06/23 07:51:32 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:32 [INFO] consul: member 'Node 189' joined, marking health alive
2016/06/23 07:51:33 [WARN] consul: endpoint injected; this should only be used for testing
2016/06/23 07:51:33 [WARN] agent: endpoint injected; this should only be used for testing
2016/06/23 07:51:33 [INFO] agent: requesting shutdown
2016/06/23 07:51:33 [INFO] consul: shutting down server
2016/06/23 07:51:33 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:33 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:33 [INFO] agent: shutdown complete
2016/06/23 07:51:33 [DEBUG] http: Shutting down http server (127.0.0.1:18989)
--- PASS: TestPreparedQuery_Delete (2.81s)
=== RUN   TestPreparedQuery_BadMethods
2016/06/23 07:51:34 [INFO] serf: EventMemberJoin: Node 190 127.0.0.1
2016/06/23 07:51:34 [INFO] serf: EventMemberJoin: Node 190.dc1 127.0.0.1
2016/06/23 07:51:34 [INFO] raft: Node at 127.0.0.1:18190 [Follower] entering Follower state
2016/06/23 07:51:34 [INFO] consul: adding LAN server Node 190 (Addr: 127.0.0.1:18190) (DC: dc1)
2016/06/23 07:51:34 [INFO] consul: adding WAN server Node 190.dc1 (Addr: 127.0.0.1:18190) (DC: dc1)
2016/06/23 07:51:34 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:34 [INFO] raft: Node at 127.0.0.1:18190 [Candidate] entering Candidate state
2016/06/23 07:51:34 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:34 [DEBUG] raft: Vote granted from 127.0.0.1:18190. Tally: 1
2016/06/23 07:51:34 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:34 [INFO] raft: Node at 127.0.0.1:18190 [Leader] entering Leader state
2016/06/23 07:51:34 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:34 [INFO] consul: New leader elected: Node 190
2016/06/23 07:51:34 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:34 [DEBUG] raft: Node 127.0.0.1:18190 updated peer set (2): [127.0.0.1:18190]
2016/06/23 07:51:34 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:35 [INFO] consul: member 'Node 190' joined, marking health alive
2016/06/23 07:51:35 [INFO] agent: requesting shutdown
2016/06/23 07:51:35 [INFO] consul: shutting down server
2016/06/23 07:51:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:35 [INFO] agent: shutdown complete
2016/06/23 07:51:35 [DEBUG] http: Shutting down http server (127.0.0.1:18990)
2016/06/23 07:51:36 [INFO] raft: Node at 127.0.0.1:18191 [Follower] entering Follower state
2016/06/23 07:51:36 [INFO] serf: EventMemberJoin: Node 191 127.0.0.1
2016/06/23 07:51:36 [INFO] consul: adding LAN server Node 191 (Addr: 127.0.0.1:18191) (DC: dc1)
2016/06/23 07:51:36 [INFO] serf: EventMemberJoin: Node 191.dc1 127.0.0.1
2016/06/23 07:51:36 [INFO] consul: adding WAN server Node 191.dc1 (Addr: 127.0.0.1:18191) (DC: dc1)
2016/06/23 07:51:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:36 [INFO] raft: Node at 127.0.0.1:18191 [Candidate] entering Candidate state
2016/06/23 07:51:37 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:37 [DEBUG] raft: Vote granted from 127.0.0.1:18191. Tally: 1
2016/06/23 07:51:37 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:37 [INFO] raft: Node at 127.0.0.1:18191 [Leader] entering Leader state
2016/06/23 07:51:37 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:37 [INFO] consul: New leader elected: Node 191
2016/06/23 07:51:37 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:37 [DEBUG] raft: Node 127.0.0.1:18191 updated peer set (2): [127.0.0.1:18191]
2016/06/23 07:51:37 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:38 [INFO] consul: member 'Node 191' joined, marking health alive
2016/06/23 07:51:38 [INFO] agent: requesting shutdown
2016/06/23 07:51:38 [INFO] consul: shutting down server
2016/06/23 07:51:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:38 [INFO] agent: shutdown complete
2016/06/23 07:51:38 [DEBUG] http: Shutting down http server (127.0.0.1:18991)
--- PASS: TestPreparedQuery_BadMethods (5.00s)
=== RUN   TestPreparedQuery_parseLimit
--- PASS: TestPreparedQuery_parseLimit (0.00s)
=== RUN   TestPreparedQuery_Integration
2016/06/23 07:51:39 [INFO] raft: Node at 127.0.0.1:18192 [Follower] entering Follower state
2016/06/23 07:51:39 [INFO] serf: EventMemberJoin: Node 192 127.0.0.1
2016/06/23 07:51:39 [INFO] serf: EventMemberJoin: Node 192.dc1 127.0.0.1
2016/06/23 07:51:39 [INFO] consul: adding LAN server Node 192 (Addr: 127.0.0.1:18192) (DC: dc1)
2016/06/23 07:51:39 [INFO] consul: adding WAN server Node 192.dc1 (Addr: 127.0.0.1:18192) (DC: dc1)
2016/06/23 07:51:39 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:39 [INFO] raft: Node at 127.0.0.1:18192 [Candidate] entering Candidate state
2016/06/23 07:51:39 [DEBUG] memberlist: Potential blocking operation. Last command took 75.931991ms
2016/06/23 07:51:39 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:39 [DEBUG] raft: Vote granted from 127.0.0.1:18192. Tally: 1
2016/06/23 07:51:39 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:39 [INFO] raft: Node at 127.0.0.1:18192 [Leader] entering Leader state
2016/06/23 07:51:39 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:39 [INFO] consul: New leader elected: Node 192
2016/06/23 07:51:40 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:40 [DEBUG] raft: Node 127.0.0.1:18192 updated peer set (2): [127.0.0.1:18192]
2016/06/23 07:51:40 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:40 [INFO] consul: member 'Node 192' joined, marking health alive
2016/06/23 07:51:42 [DEBUG] memberlist: Potential blocking operation. Last command took 56.491729ms
2016/06/23 07:51:42 [INFO] agent: requesting shutdown
2016/06/23 07:51:42 [INFO] consul: shutting down server
2016/06/23 07:51:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:43 [INFO] agent: shutdown complete
2016/06/23 07:51:43 [DEBUG] http: Shutting down http server (127.0.0.1:18992)
--- PASS: TestPreparedQuery_Integration (4.67s)
=== RUN   TestRexecWriter
--- PASS: TestRexecWriter (0.03s)
=== RUN   TestRemoteExecGetSpec
2016/06/23 07:51:44 [INFO] raft: Node at 127.0.0.1:18193 [Follower] entering Follower state
2016/06/23 07:51:44 [INFO] serf: EventMemberJoin: Node 193 127.0.0.1
2016/06/23 07:51:44 [INFO] consul: adding LAN server Node 193 (Addr: 127.0.0.1:18193) (DC: dc1)
2016/06/23 07:51:44 [INFO] serf: EventMemberJoin: Node 193.dc1 127.0.0.1
2016/06/23 07:51:44 [INFO] consul: adding WAN server Node 193.dc1 (Addr: 127.0.0.1:18193) (DC: dc1)
2016/06/23 07:51:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:44 [INFO] raft: Node at 127.0.0.1:18193 [Candidate] entering Candidate state
2016/06/23 07:51:45 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:45 [DEBUG] raft: Vote granted from 127.0.0.1:18193. Tally: 1
2016/06/23 07:51:45 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:45 [INFO] raft: Node at 127.0.0.1:18193 [Leader] entering Leader state
2016/06/23 07:51:45 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:45 [INFO] consul: New leader elected: Node 193
2016/06/23 07:51:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:45 [DEBUG] raft: Node 127.0.0.1:18193 updated peer set (2): [127.0.0.1:18193]
2016/06/23 07:51:45 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:46 [INFO] consul: member 'Node 193' joined, marking health alive
2016/06/23 07:51:47 [INFO] agent: requesting shutdown
2016/06/23 07:51:47 [INFO] consul: shutting down server
2016/06/23 07:51:47 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:47 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:47 [INFO] agent: shutdown complete
--- PASS: TestRemoteExecGetSpec (4.30s)
=== RUN   TestRemoteExecGetSpec_ACLToken
2016/06/23 07:51:48 [INFO] raft: Node at 127.0.0.1:18194 [Follower] entering Follower state
2016/06/23 07:51:48 [INFO] serf: EventMemberJoin: Node 194 127.0.0.1
2016/06/23 07:51:48 [INFO] consul: adding LAN server Node 194 (Addr: 127.0.0.1:18194) (DC: dc1)
2016/06/23 07:51:48 [INFO] serf: EventMemberJoin: Node 194.dc1 127.0.0.1
2016/06/23 07:51:48 [INFO] consul: adding WAN server Node 194.dc1 (Addr: 127.0.0.1:18194) (DC: dc1)
2016/06/23 07:51:48 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:48 [INFO] raft: Node at 127.0.0.1:18194 [Candidate] entering Candidate state
2016/06/23 07:51:48 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:48 [DEBUG] raft: Vote granted from 127.0.0.1:18194. Tally: 1
2016/06/23 07:51:48 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:48 [INFO] raft: Node at 127.0.0.1:18194 [Leader] entering Leader state
2016/06/23 07:51:48 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:48 [INFO] consul: New leader elected: Node 194
2016/06/23 07:51:48 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:48 [DEBUG] raft: Node 127.0.0.1:18194 updated peer set (2): [127.0.0.1:18194]
2016/06/23 07:51:49 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:49 [INFO] consul: member 'Node 194' joined, marking health alive
2016/06/23 07:51:51 [INFO] agent: requesting shutdown
2016/06/23 07:51:51 [INFO] consul: shutting down server
2016/06/23 07:51:51 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:51 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:51 [INFO] agent: shutdown complete
--- PASS: TestRemoteExecGetSpec_ACLToken (3.81s)
=== RUN   TestRemoteExecWrites
2016/06/23 07:51:51 [INFO] raft: Node at 127.0.0.1:18195 [Follower] entering Follower state
2016/06/23 07:51:51 [INFO] serf: EventMemberJoin: Node 195 127.0.0.1
2016/06/23 07:51:51 [INFO] consul: adding LAN server Node 195 (Addr: 127.0.0.1:18195) (DC: dc1)
2016/06/23 07:51:52 [INFO] serf: EventMemberJoin: Node 195.dc1 127.0.0.1
2016/06/23 07:51:52 [INFO] consul: adding WAN server Node 195.dc1 (Addr: 127.0.0.1:18195) (DC: dc1)
2016/06/23 07:51:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:52 [INFO] raft: Node at 127.0.0.1:18195 [Candidate] entering Candidate state
2016/06/23 07:51:52 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:52 [DEBUG] raft: Vote granted from 127.0.0.1:18195. Tally: 1
2016/06/23 07:51:52 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:52 [INFO] raft: Node at 127.0.0.1:18195 [Leader] entering Leader state
2016/06/23 07:51:52 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:52 [INFO] consul: New leader elected: Node 195
2016/06/23 07:51:52 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:52 [DEBUG] raft: Node 127.0.0.1:18195 updated peer set (2): [127.0.0.1:18195]
2016/06/23 07:51:53 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:53 [INFO] consul: member 'Node 195' joined, marking health alive
2016/06/23 07:51:53 [DEBUG] memberlist: Potential blocking operation. Last command took 35.876098ms
2016/06/23 07:51:55 [INFO] agent: requesting shutdown
2016/06/23 07:51:55 [INFO] consul: shutting down server
2016/06/23 07:51:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:55 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18501
2016/06/23 07:51:55 [DEBUG] memberlist: TCP connection from=127.0.0.1:38856
2016/06/23 07:51:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:55 [INFO] agent: shutdown complete
--- PASS: TestRemoteExecWrites (4.25s)
=== RUN   TestRemoteExecWrites_ACLToken
2016/06/23 07:51:56 [INFO] serf: EventMemberJoin: Node 196 127.0.0.1
2016/06/23 07:51:56 [INFO] raft: Node at 127.0.0.1:18196 [Follower] entering Follower state
2016/06/23 07:51:56 [INFO] consul: adding LAN server Node 196 (Addr: 127.0.0.1:18196) (DC: dc1)
2016/06/23 07:51:56 [INFO] serf: EventMemberJoin: Node 196.dc1 127.0.0.1
2016/06/23 07:51:56 [INFO] consul: adding WAN server Node 196.dc1 (Addr: 127.0.0.1:18196) (DC: dc1)
2016/06/23 07:51:56 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:56 [INFO] raft: Node at 127.0.0.1:18196 [Candidate] entering Candidate state
2016/06/23 07:51:57 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:57 [DEBUG] raft: Vote granted from 127.0.0.1:18196. Tally: 1
2016/06/23 07:51:57 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:57 [INFO] raft: Node at 127.0.0.1:18196 [Leader] entering Leader state
2016/06/23 07:51:57 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:57 [INFO] consul: New leader elected: Node 196
2016/06/23 07:51:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:57 [DEBUG] raft: Node 127.0.0.1:18196 updated peer set (2): [127.0.0.1:18196]
2016/06/23 07:51:58 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:58 [DEBUG] memberlist: Potential blocking operation. Last command took 17.417533ms
2016/06/23 07:51:58 [INFO] consul: member 'Node 196' joined, marking health alive
2016/06/23 07:52:00 [INFO] agent: requesting shutdown
2016/06/23 07:52:00 [INFO] consul: shutting down server
2016/06/23 07:52:00 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:00 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:00 [INFO] agent: shutdown complete
--- PASS: TestRemoteExecWrites_ACLToken (5.15s)
=== RUN   TestHandleRemoteExec
2016/06/23 07:52:01 [INFO] raft: Node at 127.0.0.1:18197 [Follower] entering Follower state
2016/06/23 07:52:01 [INFO] serf: EventMemberJoin: Node 197 127.0.0.1
2016/06/23 07:52:01 [INFO] consul: adding LAN server Node 197 (Addr: 127.0.0.1:18197) (DC: dc1)
2016/06/23 07:52:01 [INFO] serf: EventMemberJoin: Node 197.dc1 127.0.0.1
2016/06/23 07:52:01 [INFO] consul: adding WAN server Node 197.dc1 (Addr: 127.0.0.1:18197) (DC: dc1)
2016/06/23 07:52:01 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:01 [INFO] raft: Node at 127.0.0.1:18197 [Candidate] entering Candidate state
2016/06/23 07:52:02 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:02 [DEBUG] raft: Vote granted from 127.0.0.1:18197. Tally: 1
2016/06/23 07:52:02 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:02 [INFO] raft: Node at 127.0.0.1:18197 [Leader] entering Leader state
2016/06/23 07:52:02 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:02 [INFO] consul: New leader elected: Node 197
2016/06/23 07:52:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:02 [DEBUG] raft: Node 127.0.0.1:18197 updated peer set (2): [127.0.0.1:18197]
2016/06/23 07:52:02 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:03 [INFO] consul: member 'Node 197' joined, marking health alive
2016/06/23 07:52:04 [DEBUG] agent: received remote exec event (ID: 6e6e40a9-cefd-acf8-23c8-d68f797fb957)
2016/06/23 07:52:04 [INFO] agent: remote exec 'uptime'
2016/06/23 07:52:04 [INFO] agent: requesting shutdown
2016/06/23 07:52:04 [INFO] consul: shutting down server
2016/06/23 07:52:04 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:05 [INFO] agent: shutdown complete
--- PASS: TestHandleRemoteExec (4.34s)
=== RUN   TestHandleRemoteExecFailed
2016/06/23 07:52:05 [INFO] serf: EventMemberJoin: Node 198 127.0.0.1
2016/06/23 07:52:05 [INFO] raft: Node at 127.0.0.1:18198 [Follower] entering Follower state
2016/06/23 07:52:05 [INFO] consul: adding LAN server Node 198 (Addr: 127.0.0.1:18198) (DC: dc1)
2016/06/23 07:52:05 [INFO] serf: EventMemberJoin: Node 198.dc1 127.0.0.1
2016/06/23 07:52:05 [INFO] consul: adding WAN server Node 198.dc1 (Addr: 127.0.0.1:18198) (DC: dc1)
2016/06/23 07:52:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:05 [INFO] raft: Node at 127.0.0.1:18198 [Candidate] entering Candidate state
2016/06/23 07:52:06 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:06 [DEBUG] raft: Vote granted from 127.0.0.1:18198. Tally: 1
2016/06/23 07:52:06 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:06 [INFO] raft: Node at 127.0.0.1:18198 [Leader] entering Leader state
2016/06/23 07:52:06 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:06 [INFO] consul: New leader elected: Node 198
2016/06/23 07:52:06 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:06 [DEBUG] raft: Node 127.0.0.1:18198 updated peer set (2): [127.0.0.1:18198]
2016/06/23 07:52:06 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:07 [INFO] consul: member 'Node 198' joined, marking health alive
2016/06/23 07:52:08 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18500
2016/06/23 07:52:08 [DEBUG] memberlist: TCP connection from=127.0.0.1:51214
2016/06/23 07:52:08 [DEBUG] agent: received remote exec event (ID: 41c3f3c4-8e44-4311-1da5-32be7b52eaf7)
2016/06/23 07:52:08 [INFO] agent: remote exec 'echo failing;exit 2'
2016/06/23 07:52:09 [INFO] agent: requesting shutdown
2016/06/23 07:52:09 [INFO] consul: shutting down server
2016/06/23 07:52:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:09 [INFO] agent: shutdown complete
--- PASS: TestHandleRemoteExecFailed (4.50s)
=== RUN   TestRPCClient_UnixSocket
2016/06/23 07:52:10 [INFO] raft: Node at 127.0.0.1:18199 [Follower] entering Follower state
2016/06/23 07:52:10 [INFO] serf: EventMemberJoin: Node 199 127.0.0.1
2016/06/23 07:52:10 [INFO] consul: adding LAN server Node 199 (Addr: 127.0.0.1:18199) (DC: dc1)
2016/06/23 07:52:10 [INFO] serf: EventMemberJoin: Node 199.dc1 127.0.0.1
2016/06/23 07:52:10 [INFO] agent.rpc: Accepted client: @
2016/06/23 07:52:10 [INFO] consul: adding WAN server Node 199.dc1 (Addr: 127.0.0.1:18199) (DC: dc1)
2016/06/23 07:52:10 [INFO] agent: requesting shutdown
2016/06/23 07:52:10 [INFO] consul: shutting down server
2016/06/23 07:52:10 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:10 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:10 [INFO] raft: Node at 127.0.0.1:18199 [Candidate] entering Candidate state
2016/06/23 07:52:10 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:11 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:11 [INFO] agent: shutdown complete
--- PASS: TestRPCClient_UnixSocket (1.78s)
=== RUN   TestRPCClientForceLeave
2016/06/23 07:52:11 [INFO] serf: EventMemberJoin: Node 200 127.0.0.1
2016/06/23 07:52:11 [INFO] serf: EventMemberJoin: Node 200.dc1 127.0.0.1
2016/06/23 07:52:11 [INFO] agent.rpc: Accepted client: 127.0.0.1:49356
2016/06/23 07:52:11 [INFO] raft: Node at 127.0.0.1:18200 [Follower] entering Follower state
2016/06/23 07:52:11 [INFO] consul: adding LAN server Node 200 (Addr: 127.0.0.1:18200) (DC: dc1)
2016/06/23 07:52:11 [INFO] consul: adding WAN server Node 200.dc1 (Addr: 127.0.0.1:18200) (DC: dc1)
2016/06/23 07:52:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:12 [INFO] raft: Node at 127.0.0.1:18200 [Candidate] entering Candidate state
2016/06/23 07:52:12 [INFO] raft: Node at 127.0.0.1:18201 [Follower] entering Follower state
2016/06/23 07:52:12 [INFO] serf: EventMemberJoin: Node 201 127.0.0.1
2016/06/23 07:52:12 [INFO] consul: adding LAN server Node 201 (Addr: 127.0.0.1:18201) (DC: dc1)
2016/06/23 07:52:12 [INFO] serf: EventMemberJoin: Node 201.dc1 127.0.0.1
2016/06/23 07:52:12 [INFO] consul: adding WAN server Node 201.dc1 (Addr: 127.0.0.1:18201) (DC: dc1)
2016/06/23 07:52:12 [INFO] agent.rpc: Accepted client: 127.0.0.1:41144
2016/06/23 07:52:12 [INFO] agent: (LAN) joining: [127.0.0.1:18401]
2016/06/23 07:52:12 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18401
2016/06/23 07:52:12 [DEBUG] memberlist: TCP connection from=127.0.0.1:43206
2016/06/23 07:52:12 [INFO] serf: EventMemberJoin: Node 201 127.0.0.1
2016/06/23 07:52:12 [INFO] serf: EventMemberJoin: Node 200 127.0.0.1
2016/06/23 07:52:12 [INFO] agent: (LAN) joined: 1 Err: <nil>
2016/06/23 07:52:12 [INFO] agent: requesting shutdown
2016/06/23 07:52:12 [INFO] consul: shutting down server
2016/06/23 07:52:12 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:12 [INFO] consul: adding LAN server Node 200 (Addr: 127.0.0.1:18200) (DC: dc1)
2016/06/23 07:52:12 [INFO] consul: adding LAN server Node 201 (Addr: 127.0.0.1:18201) (DC: dc1)
2016/06/23 07:52:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:12 [INFO] raft: Node at 127.0.0.1:18201 [Candidate] entering Candidate state
2016/06/23 07:52:12 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:12 [DEBUG] raft: Vote granted from 127.0.0.1:18200. Tally: 1
2016/06/23 07:52:12 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:12 [INFO] raft: Node at 127.0.0.1:18200 [Leader] entering Leader state
2016/06/23 07:52:12 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:12 [INFO] consul: New leader elected: Node 200
2016/06/23 07:52:12 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:12 [DEBUG] raft: Node 127.0.0.1:18200 updated peer set (2): [127.0.0.1:18200]
2016/06/23 07:52:12 [INFO] memberlist: Suspect Node 201 has failed, no acks received
2016/06/23 07:52:12 [INFO] memberlist: Suspect Node 201 has failed, no acks received
2016/06/23 07:52:12 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:13 [INFO] memberlist: Suspect Node 201 has failed, no acks received
2016/06/23 07:52:13 [INFO] memberlist: Marking Node 201 as failed, suspect timeout reached
2016/06/23 07:52:13 [INFO] serf: EventMemberFailed: Node 201 127.0.0.1
2016/06/23 07:52:13 [INFO] consul: removing LAN server Node 201 (Addr: 127.0.0.1:18201) (DC: dc1)
2016/06/23 07:52:13 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:13 [INFO] agent: shutdown complete
2016/06/23 07:52:13 [INFO] Force leaving node: Node 201
2016/06/23 07:52:13 [INFO] serf: EventMemberLeave (forced): Node 201 127.0.0.1
2016/06/23 07:52:13 [INFO] consul: removing LAN server Node 201 (Addr: 127.0.0.1:18201) (DC: dc1)
2016/06/23 07:52:13 [INFO] agent: requesting shutdown
2016/06/23 07:52:13 [INFO] consul: shutting down server
2016/06/23 07:52:13 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:13 [INFO] memberlist: Suspect Node 201 has failed, no acks received
2016/06/23 07:52:13 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:13 [INFO] consul: member 'Node 200' joined, marking health alive
2016/06/23 07:52:13 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:52:13 [ERR] consul: failed to reconcile member: {Node 200 127.0.0.1 18400 map[vsn_max:3 build:a.bc.d: port:18200 bootstrap:1 role:consul dc:dc1 vsn:2 vsn_min:1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:52:13 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:52:13 [INFO] agent: shutdown complete
--- PASS: TestRPCClientForceLeave (2.55s)
=== RUN   TestRPCClientJoinLAN
2016/06/23 07:52:14 [INFO] raft: Node at 127.0.0.1:18202 [Follower] entering Follower state
2016/06/23 07:52:14 [INFO] serf: EventMemberJoin: Node 202 127.0.0.1
2016/06/23 07:52:14 [INFO] serf: EventMemberJoin: Node 202.dc1 127.0.0.1
2016/06/23 07:52:14 [INFO] consul: adding LAN server Node 202 (Addr: 127.0.0.1:18202) (DC: dc1)
2016/06/23 07:52:14 [INFO] agent.rpc: Accepted client: 127.0.0.1:53102
2016/06/23 07:52:14 [INFO] consul: adding WAN server Node 202.dc1 (Addr: 127.0.0.1:18202) (DC: dc1)
2016/06/23 07:52:14 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:14 [INFO] raft: Node at 127.0.0.1:18202 [Candidate] entering Candidate state
2016/06/23 07:52:15 [INFO] raft: Node at 127.0.0.1:18203 [Follower] entering Follower state
2016/06/23 07:52:15 [INFO] serf: EventMemberJoin: Node 203 127.0.0.1
2016/06/23 07:52:15 [INFO] consul: adding LAN server Node 203 (Addr: 127.0.0.1:18203) (DC: dc1)
2016/06/23 07:52:15 [INFO] serf: EventMemberJoin: Node 203.dc1 127.0.0.1
2016/06/23 07:52:15 [INFO] consul: adding WAN server Node 203.dc1 (Addr: 127.0.0.1:18203) (DC: dc1)
2016/06/23 07:52:15 [INFO] agent.rpc: Accepted client: 127.0.0.1:38348
2016/06/23 07:52:15 [INFO] agent: (LAN) joining: [127.0.0.1:18403]
2016/06/23 07:52:15 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18403
2016/06/23 07:52:15 [DEBUG] memberlist: TCP connection from=127.0.0.1:33552
2016/06/23 07:52:15 [INFO] serf: EventMemberJoin: Node 202 127.0.0.1
2016/06/23 07:52:15 [INFO] serf: EventMemberJoin: Node 203 127.0.0.1
2016/06/23 07:52:15 [INFO] consul: adding LAN server Node 202 (Addr: 127.0.0.1:18202) (DC: dc1)
2016/06/23 07:52:15 [INFO] agent: (LAN) joined: 1 Err: <nil>
2016/06/23 07:52:15 [INFO] consul: adding LAN server Node 203 (Addr: 127.0.0.1:18203) (DC: dc1)
2016/06/23 07:52:15 [INFO] agent: requesting shutdown
2016/06/23 07:52:15 [INFO] consul: shutting down server
2016/06/23 07:52:15 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:15 [INFO] raft: Node at 127.0.0.1:18203 [Candidate] entering Candidate state
2016/06/23 07:52:15 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:15 [DEBUG] raft: Vote granted from 127.0.0.1:18202. Tally: 1
2016/06/23 07:52:15 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:15 [INFO] raft: Node at 127.0.0.1:18202 [Leader] entering Leader state
2016/06/23 07:52:15 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:15 [INFO] consul: New leader elected: Node 202
2016/06/23 07:52:15 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:15 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:15 [INFO] memberlist: Suspect Node 203 has failed, no acks received
2016/06/23 07:52:15 [DEBUG] raft: Node 127.0.0.1:18202 updated peer set (2): [127.0.0.1:18202]
2016/06/23 07:52:15 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:15 [INFO] memberlist: Suspect Node 203 has failed, no acks received
2016/06/23 07:52:15 [INFO] memberlist: Marking Node 203 as failed, suspect timeout reached
2016/06/23 07:52:15 [INFO] serf: EventMemberFailed: Node 203 127.0.0.1
2016/06/23 07:52:15 [INFO] consul: removing LAN server Node 203 (Addr: 127.0.0.1:18203) (DC: dc1)
2016/06/23 07:52:15 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:15 [INFO] agent: shutdown complete
2016/06/23 07:52:15 [INFO] agent: requesting shutdown
2016/06/23 07:52:15 [INFO] consul: shutting down server
2016/06/23 07:52:15 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:15 [INFO] memberlist: Suspect Node 203 has failed, no acks received
2016/06/23 07:52:15 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:15 [ERR] consul: ACL initialization failed: failed to create master token: leadership lost while committing log
2016/06/23 07:52:15 [ERR] consul: failed to establish leadership: failed to create master token: leadership lost while committing log
2016/06/23 07:52:15 [INFO] agent: shutdown complete
--- PASS: TestRPCClientJoinLAN (1.98s)
=== RUN   TestRPCClientJoinWAN
2016/06/23 07:52:16 [INFO] raft: Node at 127.0.0.1:18204 [Follower] entering Follower state
2016/06/23 07:52:16 [INFO] serf: EventMemberJoin: Node 204 127.0.0.1
2016/06/23 07:52:16 [INFO] consul: adding LAN server Node 204 (Addr: 127.0.0.1:18204) (DC: dc1)
2016/06/23 07:52:16 [INFO] serf: EventMemberJoin: Node 204.dc1 127.0.0.1
2016/06/23 07:52:16 [INFO] consul: adding WAN server Node 204.dc1 (Addr: 127.0.0.1:18204) (DC: dc1)
2016/06/23 07:52:16 [INFO] agent.rpc: Accepted client: 127.0.0.1:52928
2016/06/23 07:52:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:16 [INFO] raft: Node at 127.0.0.1:18204 [Candidate] entering Candidate state
2016/06/23 07:52:17 [INFO] raft: Node at 127.0.0.1:18205 [Follower] entering Follower state
2016/06/23 07:52:17 [INFO] serf: EventMemberJoin: Node 205 127.0.0.1
2016/06/23 07:52:17 [INFO] consul: adding LAN server Node 205 (Addr: 127.0.0.1:18205) (DC: dc1)
2016/06/23 07:52:17 [INFO] serf: EventMemberJoin: Node 205.dc1 127.0.0.1
2016/06/23 07:52:17 [INFO] consul: adding WAN server Node 205.dc1 (Addr: 127.0.0.1:18205) (DC: dc1)
2016/06/23 07:52:17 [INFO] agent.rpc: Accepted client: 127.0.0.1:57260
2016/06/23 07:52:17 [INFO] agent: (WAN) joining: [127.0.0.1:18605]
2016/06/23 07:52:17 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18605
2016/06/23 07:52:17 [DEBUG] memberlist: TCP connection from=127.0.0.1:55970
2016/06/23 07:52:17 [INFO] serf: EventMemberJoin: Node 204.dc1 127.0.0.1
2016/06/23 07:52:17 [INFO] serf: EventMemberJoin: Node 205.dc1 127.0.0.1
2016/06/23 07:52:17 [INFO] consul: adding WAN server Node 204.dc1 (Addr: 127.0.0.1:18204) (DC: dc1)
2016/06/23 07:52:17 [INFO] consul: adding WAN server Node 205.dc1 (Addr: 127.0.0.1:18205) (DC: dc1)
2016/06/23 07:52:17 [INFO] agent: (WAN) joined: 1 Err: <nil>
2016/06/23 07:52:17 [INFO] agent: requesting shutdown
2016/06/23 07:52:17 [INFO] consul: shutting down server
2016/06/23 07:52:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:17 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:17 [INFO] raft: Node at 127.0.0.1:18205 [Candidate] entering Candidate state
2016/06/23 07:52:17 [DEBUG] serf: messageJoinType: Node 204.dc1
2016/06/23 07:52:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:17 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:17 [DEBUG] raft: Vote granted from 127.0.0.1:18204. Tally: 1
2016/06/23 07:52:17 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:17 [INFO] raft: Node at 127.0.0.1:18204 [Leader] entering Leader state
2016/06/23 07:52:17 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:17 [INFO] consul: New leader elected: Node 204
2016/06/23 07:52:17 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:17 [INFO] memberlist: Suspect Node 205.dc1 has failed, no acks received
2016/06/23 07:52:17 [DEBUG] raft: Node 127.0.0.1:18204 updated peer set (2): [127.0.0.1:18204]
2016/06/23 07:52:17 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:17 [INFO] memberlist: Suspect Node 205.dc1 has failed, no acks received
2016/06/23 07:52:17 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:17 [INFO] agent: shutdown complete
2016/06/23 07:52:17 [INFO] agent: requesting shutdown
2016/06/23 07:52:17 [INFO] consul: shutting down server
2016/06/23 07:52:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:17 [INFO] memberlist: Marking Node 205.dc1 as failed, suspect timeout reached
2016/06/23 07:52:17 [INFO] serf: EventMemberFailed: Node 205.dc1 127.0.0.1
2016/06/23 07:52:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:17 [ERR] consul: ACL initialization failed: failed to create master token: leadership lost while committing log
2016/06/23 07:52:17 [ERR] consul: failed to establish leadership: failed to create master token: leadership lost while committing log
2016/06/23 07:52:17 [INFO] memberlist: Suspect Node 205.dc1 has failed, no acks received
2016/06/23 07:52:17 [INFO] agent: shutdown complete
--- PASS: TestRPCClientJoinWAN (2.11s)
=== RUN   TestRPCClientLANMembers
2016/06/23 07:52:18 [INFO] raft: Node at 127.0.0.1:18206 [Follower] entering Follower state
2016/06/23 07:52:18 [INFO] serf: EventMemberJoin: Node 206 127.0.0.1
2016/06/23 07:52:18 [INFO] consul: adding LAN server Node 206 (Addr: 127.0.0.1:18206) (DC: dc1)
2016/06/23 07:52:18 [INFO] serf: EventMemberJoin: Node 206.dc1 127.0.0.1
2016/06/23 07:52:18 [INFO] consul: adding WAN server Node 206.dc1 (Addr: 127.0.0.1:18206) (DC: dc1)
2016/06/23 07:52:18 [INFO] agent.rpc: Accepted client: 127.0.0.1:38758
2016/06/23 07:52:18 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:18 [INFO] raft: Node at 127.0.0.1:18206 [Candidate] entering Candidate state
2016/06/23 07:52:19 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:19 [DEBUG] raft: Vote granted from 127.0.0.1:18206. Tally: 1
2016/06/23 07:52:19 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:19 [INFO] raft: Node at 127.0.0.1:18206 [Leader] entering Leader state
2016/06/23 07:52:19 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:19 [INFO] consul: New leader elected: Node 206
2016/06/23 07:52:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:19 [INFO] raft: Node at 127.0.0.1:18207 [Follower] entering Follower state
2016/06/23 07:52:19 [INFO] serf: EventMemberJoin: Node 207 127.0.0.1
2016/06/23 07:52:19 [INFO] consul: adding LAN server Node 207 (Addr: 127.0.0.1:18207) (DC: dc1)
2016/06/23 07:52:19 [INFO] serf: EventMemberJoin: Node 207.dc1 127.0.0.1
2016/06/23 07:52:19 [INFO] consul: adding WAN server Node 207.dc1 (Addr: 127.0.0.1:18207) (DC: dc1)
2016/06/23 07:52:19 [INFO] agent.rpc: Accepted client: 127.0.0.1:46844
2016/06/23 07:52:19 [INFO] agent: (LAN) joining: [127.0.0.1:18407]
2016/06/23 07:52:19 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18407
2016/06/23 07:52:19 [DEBUG] memberlist: TCP connection from=127.0.0.1:50160
2016/06/23 07:52:19 [INFO] serf: EventMemberJoin: Node 207 127.0.0.1
2016/06/23 07:52:19 [INFO] agent: (LAN) joined: 1 Err: <nil>
2016/06/23 07:52:19 [INFO] consul: adding LAN server Node 207 (Addr: 127.0.0.1:18207) (DC: dc1)
2016/06/23 07:52:19 [INFO] serf: EventMemberJoin: Node 206 127.0.0.1
2016/06/23 07:52:19 [INFO] agent: requesting shutdown
2016/06/23 07:52:19 [INFO] consul: shutting down server
2016/06/23 07:52:19 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:19 [DEBUG] raft: Node 127.0.0.1:18206 updated peer set (2): [127.0.0.1:18206]
2016/06/23 07:52:19 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:19 [INFO] raft: Node at 127.0.0.1:18207 [Candidate] entering Candidate state
2016/06/23 07:52:19 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:19 [INFO] memberlist: Suspect Node 207 has failed, no acks received
2016/06/23 07:52:19 [INFO] consul: member 'Node 206' joined, marking health alive
2016/06/23 07:52:19 [INFO] memberlist: Suspect Node 207 has failed, no acks received
2016/06/23 07:52:19 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:19 [INFO] agent: shutdown complete
2016/06/23 07:52:19 [ERR] consul: 'Node 207' and 'Node 206' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:52:19 [INFO] consul: member 'Node 207' joined, marking health alive
2016/06/23 07:52:19 [INFO] agent: requesting shutdown
2016/06/23 07:52:19 [INFO] consul: shutting down server
2016/06/23 07:52:19 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:20 [INFO] memberlist: Marking Node 207 as failed, suspect timeout reached
2016/06/23 07:52:20 [INFO] serf: EventMemberFailed: Node 207 127.0.0.1
2016/06/23 07:52:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:20 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:52:20 [ERR] consul: failed to reconcile member: {Node 207 127.0.0.1 18407 map[vsn:2 vsn_min:1 vsn_max:3 build:a.bc.d: port:18207 bootstrap:1 role:consul dc:dc1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:52:20 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:52:20 [INFO] agent: shutdown complete
--- PASS: TestRPCClientLANMembers (2.18s)
=== RUN   TestRPCClientWANMembers
2016/06/23 07:52:20 [INFO] raft: Node at 127.0.0.1:18208 [Follower] entering Follower state
2016/06/23 07:52:20 [INFO] serf: EventMemberJoin: Node 208 127.0.0.1
2016/06/23 07:52:20 [INFO] consul: adding LAN server Node 208 (Addr: 127.0.0.1:18208) (DC: dc1)
2016/06/23 07:52:20 [INFO] serf: EventMemberJoin: Node 208.dc1 127.0.0.1
2016/06/23 07:52:20 [INFO] consul: adding WAN server Node 208.dc1 (Addr: 127.0.0.1:18208) (DC: dc1)
2016/06/23 07:52:20 [INFO] agent.rpc: Accepted client: 127.0.0.1:48646
2016/06/23 07:52:20 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:20 [INFO] raft: Node at 127.0.0.1:18208 [Candidate] entering Candidate state
2016/06/23 07:52:21 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:21 [DEBUG] raft: Vote granted from 127.0.0.1:18208. Tally: 1
2016/06/23 07:52:21 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:21 [INFO] raft: Node at 127.0.0.1:18208 [Leader] entering Leader state
2016/06/23 07:52:21 [INFO] raft: Node at 127.0.0.1:18209 [Follower] entering Follower state
2016/06/23 07:52:21 [INFO] serf: EventMemberJoin: Node 209 127.0.0.1
2016/06/23 07:52:21 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:21 [INFO] consul: New leader elected: Node 208
2016/06/23 07:52:21 [INFO] consul: adding LAN server Node 209 (Addr: 127.0.0.1:18209) (DC: dc1)
2016/06/23 07:52:21 [INFO] serf: EventMemberJoin: Node 209.dc1 127.0.0.1
2016/06/23 07:52:21 [INFO] consul: adding WAN server Node 209.dc1 (Addr: 127.0.0.1:18209) (DC: dc1)
2016/06/23 07:52:21 [INFO] agent.rpc: Accepted client: 127.0.0.1:38306
2016/06/23 07:52:21 [INFO] agent: (WAN) joining: [127.0.0.1:18609]
2016/06/23 07:52:21 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18609
2016/06/23 07:52:21 [DEBUG] memberlist: TCP connection from=127.0.0.1:39584
2016/06/23 07:52:21 [INFO] serf: EventMemberJoin: Node 208.dc1 127.0.0.1
2016/06/23 07:52:21 [INFO] serf: EventMemberJoin: Node 209.dc1 127.0.0.1
2016/06/23 07:52:21 [INFO] consul: adding WAN server Node 208.dc1 (Addr: 127.0.0.1:18208) (DC: dc1)
2016/06/23 07:52:21 [INFO] consul: adding WAN server Node 209.dc1 (Addr: 127.0.0.1:18209) (DC: dc1)
2016/06/23 07:52:21 [INFO] agent: (WAN) joined: 1 Err: <nil>
2016/06/23 07:52:21 [INFO] agent: requesting shutdown
2016/06/23 07:52:21 [INFO] consul: shutting down server
2016/06/23 07:52:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:21 [DEBUG] serf: messageJoinType: Node 208.dc1
2016/06/23 07:52:21 [DEBUG] serf: messageJoinType: Node 208.dc1
2016/06/23 07:52:21 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:21 [INFO] raft: Node at 127.0.0.1:18209 [Candidate] entering Candidate state
2016/06/23 07:52:21 [DEBUG] serf: messageJoinType: Node 208.dc1
2016/06/23 07:52:21 [DEBUG] serf: messageJoinType: Node 208.dc1
2016/06/23 07:52:21 [DEBUG] serf: messageJoinType: Node 208.dc1
2016/06/23 07:52:21 [DEBUG] serf: messageJoinType: Node 208.dc1
2016/06/23 07:52:21 [DEBUG] serf: messageJoinType: Node 208.dc1
2016/06/23 07:52:21 [DEBUG] serf: messageJoinType: Node 208.dc1
2016/06/23 07:52:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:21 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:21 [DEBUG] raft: Node 127.0.0.1:18208 updated peer set (2): [127.0.0.1:18208]
2016/06/23 07:52:21 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:21 [INFO] memberlist: Suspect Node 209.dc1 has failed, no acks received
2016/06/23 07:52:21 [INFO] memberlist: Suspect Node 209.dc1 has failed, no acks received
2016/06/23 07:52:22 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:22 [INFO] agent: shutdown complete
2016/06/23 07:52:22 [INFO] agent: requesting shutdown
2016/06/23 07:52:22 [INFO] consul: shutting down server
2016/06/23 07:52:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:22 [INFO] memberlist: Marking Node 209.dc1 as failed, suspect timeout reached
2016/06/23 07:52:22 [INFO] serf: EventMemberFailed: Node 209.dc1 127.0.0.1
2016/06/23 07:52:22 [INFO] memberlist: Suspect Node 209.dc1 has failed, no acks received
2016/06/23 07:52:22 [INFO] consul: member 'Node 208' joined, marking health alive
2016/06/23 07:52:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:22 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:52:22 [ERR] consul: failed to reconcile member: {Node 208 127.0.0.1 18408 map[vsn:2 vsn_min:1 vsn_max:3 build:a.bc.d: port:18208 bootstrap:1 role:consul dc:dc1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:52:22 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:52:22 [INFO] agent: shutdown complete
--- PASS: TestRPCClientWANMembers (2.26s)
=== RUN   TestRPCClientStats
2016/06/23 07:52:22 [INFO] raft: Node at 127.0.0.1:18210 [Follower] entering Follower state
2016/06/23 07:52:22 [INFO] serf: EventMemberJoin: Node 210 127.0.0.1
2016/06/23 07:52:22 [INFO] consul: adding LAN server Node 210 (Addr: 127.0.0.1:18210) (DC: dc1)
2016/06/23 07:52:22 [INFO] serf: EventMemberJoin: Node 210.dc1 127.0.0.1
2016/06/23 07:52:22 [INFO] consul: adding WAN server Node 210.dc1 (Addr: 127.0.0.1:18210) (DC: dc1)
2016/06/23 07:52:22 [INFO] agent.rpc: Accepted client: 127.0.0.1:58860
2016/06/23 07:52:22 [INFO] agent: requesting shutdown
2016/06/23 07:52:22 [INFO] consul: shutting down server
2016/06/23 07:52:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:22 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:22 [INFO] raft: Node at 127.0.0.1:18210 [Candidate] entering Candidate state
2016/06/23 07:52:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:23 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:23 [INFO] agent: shutdown complete
--- PASS: TestRPCClientStats (1.15s)
=== RUN   TestRPCClientLeave
2016/06/23 07:52:24 [INFO] raft: Node at 127.0.0.1:18211 [Follower] entering Follower state
2016/06/23 07:52:24 [INFO] serf: EventMemberJoin: Node 211 127.0.0.1
2016/06/23 07:52:24 [INFO] consul: adding LAN server Node 211 (Addr: 127.0.0.1:18211) (DC: dc1)
2016/06/23 07:52:24 [INFO] serf: EventMemberJoin: Node 211.dc1 127.0.0.1
2016/06/23 07:52:24 [INFO] consul: adding WAN server Node 211.dc1 (Addr: 127.0.0.1:18211) (DC: dc1)
2016/06/23 07:52:24 [INFO] agent.rpc: Accepted client: 127.0.0.1:55518
2016/06/23 07:52:24 [INFO] agent.rpc: Graceful leave triggered
2016/06/23 07:52:24 [INFO] consul: server starting leave
2016/06/23 07:52:24 [INFO] serf: EventMemberLeave: Node 211.dc1 127.0.0.1
2016/06/23 07:52:24 [INFO] serf: EventMemberLeave: Node 211 127.0.0.1
2016/06/23 07:52:24 [INFO] agent: requesting shutdown
2016/06/23 07:52:24 [INFO] consul: shutting down server
2016/06/23 07:52:24 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:24 [INFO] raft: Node at 127.0.0.1:18211 [Candidate] entering Candidate state
2016/06/23 07:52:25 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:25 [INFO] agent: shutdown complete
--- FAIL: TestRPCClientLeave (1.87s)
	rpc_client_test.go:264: agent should be shutdown!
=== RUN   TestRPCClientMonitor
2016/06/23 07:52:26 [DEBUG] memberlist: Potential blocking operation. Last command took 16.612842ms
2016/06/23 07:52:26 [INFO] serf: EventMemberJoin: Node 212 127.0.0.1
2016/06/23 07:52:26 [INFO] raft: Node at 127.0.0.1:18212 [Follower] entering Follower state
2016/06/23 07:52:26 [INFO] consul: adding LAN server Node 212 (Addr: 127.0.0.1:18212) (DC: dc1)
2016/06/23 07:52:26 [INFO] serf: EventMemberJoin: Node 212.dc1 127.0.0.1
2016/06/23 07:52:26 [INFO] consul: adding WAN server Node 212.dc1 (Addr: 127.0.0.1:18212) (DC: dc1)
2016/06/23 07:52:26 [INFO] agent.rpc: Accepted client: 127.0.0.1:41408
2016/06/23 07:52:26 [INFO] agent: (LAN) joining: []
2016/06/23 07:52:26 [INFO] agent: (LAN) joined: 0 Err: <nil>
2016/06/23 07:52:26 [INFO] agent: requesting shutdown
2016/06/23 07:52:26 [INFO] consul: shutting down server
2016/06/23 07:52:26 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:26 [INFO] raft: Node at 127.0.0.1:18212 [Candidate] entering Candidate state
2016/06/23 07:52:26 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:27 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:27 [INFO] agent: shutdown complete
--- PASS: TestRPCClientMonitor (1.65s)
=== RUN   TestRPCClientListKeys
2016/06/23 07:52:27 [INFO] raft: Node at 127.0.0.1:18213 [Follower] entering Follower state
2016/06/23 07:52:27 [INFO] serf: EventMemberJoin: Node 213 127.0.0.1
2016/06/23 07:52:27 [INFO] serf: EventMemberJoin: Node 213.dc1 127.0.0.1
2016/06/23 07:52:27 [INFO] agent.rpc: Accepted client: 127.0.0.1:37086
2016/06/23 07:52:27 [INFO] consul: adding WAN server Node 213.dc1 (Addr: 127.0.0.1:18213) (DC: dc1)
2016/06/23 07:52:27 [INFO] consul: adding LAN server Node 213 (Addr: 127.0.0.1:18213) (DC: dc1)
2016/06/23 07:52:27 [INFO] serf: Received list-keys query
2016/06/23 07:52:27 [DEBUG] serf: messageQueryResponseType: Node 213.dc1
2016/06/23 07:52:27 [INFO] serf: Received list-keys query
2016/06/23 07:52:27 [DEBUG] serf: messageQueryResponseType: Node 213
2016/06/23 07:52:27 [INFO] agent: requesting shutdown
2016/06/23 07:52:27 [INFO] consul: shutting down server
2016/06/23 07:52:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:27 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:27 [INFO] raft: Node at 127.0.0.1:18213 [Candidate] entering Candidate state
2016/06/23 07:52:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:28 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:28 [INFO] agent: shutdown complete
--- PASS: TestRPCClientListKeys (1.42s)
=== RUN   TestRPCClientInstallKey
2016/06/23 07:52:29 [INFO] raft: Node at 127.0.0.1:18214 [Follower] entering Follower state
2016/06/23 07:52:29 [INFO] serf: EventMemberJoin: Node 214 127.0.0.1
2016/06/23 07:52:29 [INFO] consul: adding LAN server Node 214 (Addr: 127.0.0.1:18214) (DC: dc1)
2016/06/23 07:52:29 [INFO] serf: EventMemberJoin: Node 214.dc1 127.0.0.1
2016/06/23 07:52:29 [INFO] consul: adding WAN server Node 214.dc1 (Addr: 127.0.0.1:18214) (DC: dc1)
2016/06/23 07:52:29 [INFO] agent.rpc: Accepted client: 127.0.0.1:46948
2016/06/23 07:52:29 [INFO] serf: Received list-keys query
2016/06/23 07:52:29 [DEBUG] serf: messageQueryResponseType: Node 214.dc1
2016/06/23 07:52:29 [INFO] serf: Received list-keys query
2016/06/23 07:52:29 [DEBUG] serf: messageQueryResponseType: Node 214
2016/06/23 07:52:29 [INFO] serf: Received install-key query
2016/06/23 07:52:29 [DEBUG] serf: messageQueryResponseType: Node 214.dc1
2016/06/23 07:52:29 [INFO] serf: Received install-key query
2016/06/23 07:52:29 [DEBUG] serf: messageQueryResponseType: Node 214
2016/06/23 07:52:29 [INFO] serf: Received list-keys query
2016/06/23 07:52:29 [DEBUG] serf: messageQueryResponseType: Node 214.dc1
2016/06/23 07:52:29 [INFO] serf: Received list-keys query
2016/06/23 07:52:29 [DEBUG] serf: messageQueryResponseType: Node 214
2016/06/23 07:52:29 [INFO] agent: requesting shutdown
2016/06/23 07:52:29 [INFO] consul: shutting down server
2016/06/23 07:52:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:29 [INFO] raft: Node at 127.0.0.1:18214 [Candidate] entering Candidate state
2016/06/23 07:52:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:29 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:29 [INFO] agent: shutdown complete
--- PASS: TestRPCClientInstallKey (1.38s)
=== RUN   TestRPCClientUseKey
2016/06/23 07:52:30 [DEBUG] memberlist: Potential blocking operation. Last command took 30.816943ms
2016/06/23 07:52:30 [INFO] raft: Node at 127.0.0.1:18215 [Follower] entering Follower state
2016/06/23 07:52:30 [INFO] serf: EventMemberJoin: Node 215 127.0.0.1
2016/06/23 07:52:30 [INFO] consul: adding LAN server Node 215 (Addr: 127.0.0.1:18215) (DC: dc1)
2016/06/23 07:52:30 [INFO] serf: EventMemberJoin: Node 215.dc1 127.0.0.1
2016/06/23 07:52:30 [INFO] consul: adding WAN server Node 215.dc1 (Addr: 127.0.0.1:18215) (DC: dc1)
2016/06/23 07:52:30 [INFO] agent.rpc: Accepted client: 127.0.0.1:37944
2016/06/23 07:52:30 [INFO] serf: Received install-key query
2016/06/23 07:52:30 [DEBUG] serf: messageQueryResponseType: Node 215.dc1
2016/06/23 07:52:30 [INFO] serf: Received install-key query
2016/06/23 07:52:30 [DEBUG] serf: messageQueryResponseType: Node 215
2016/06/23 07:52:30 [INFO] serf: Received list-keys query
2016/06/23 07:52:30 [DEBUG] serf: messageQueryResponseType: Node 215.dc1
2016/06/23 07:52:30 [INFO] serf: Received list-keys query
2016/06/23 07:52:30 [DEBUG] serf: messageQueryResponseType: Node 215
2016/06/23 07:52:30 [INFO] serf: Received remove-key query
2016/06/23 07:52:30 [ERR] serf: Failed to remove key: Removing the primary key is not allowed
2016/06/23 07:52:30 [DEBUG] serf: messageQueryResponseType: Node 215.dc1
2016/06/23 07:52:30 [INFO] serf: Received remove-key query
2016/06/23 07:52:30 [ERR] serf: Failed to remove key: Removing the primary key is not allowed
2016/06/23 07:52:30 [DEBUG] serf: messageQueryResponseType: Node 215
2016/06/23 07:52:30 [INFO] serf: Received use-key query
2016/06/23 07:52:30 [DEBUG] serf: messageQueryResponseType: Node 215.dc1
2016/06/23 07:52:30 [INFO] serf: Received use-key query
2016/06/23 07:52:30 [DEBUG] serf: messageQueryResponseType: Node 215
2016/06/23 07:52:30 [INFO] serf: Received remove-key query
2016/06/23 07:52:30 [DEBUG] serf: messageQueryResponseType: Node 215.dc1
2016/06/23 07:52:30 [INFO] serf: Received remove-key query
2016/06/23 07:52:30 [DEBUG] serf: messageQueryResponseType: Node 215
2016/06/23 07:52:30 [INFO] agent: requesting shutdown
2016/06/23 07:52:30 [INFO] consul: shutting down server
2016/06/23 07:52:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:30 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:30 [INFO] raft: Node at 127.0.0.1:18215 [Candidate] entering Candidate state
2016/06/23 07:52:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:31 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:31 [INFO] agent: shutdown complete
--- PASS: TestRPCClientUseKey (1.22s)
=== RUN   TestRPCClientKeyOperation_encryptionDisabled
2016/06/23 07:52:32 [INFO] raft: Node at 127.0.0.1:18216 [Follower] entering Follower state
2016/06/23 07:52:32 [INFO] serf: EventMemberJoin: Node 216 127.0.0.1
2016/06/23 07:52:32 [INFO] consul: adding LAN server Node 216 (Addr: 127.0.0.1:18216) (DC: dc1)
2016/06/23 07:52:32 [INFO] serf: EventMemberJoin: Node 216.dc1 127.0.0.1
2016/06/23 07:52:32 [INFO] consul: adding WAN server Node 216.dc1 (Addr: 127.0.0.1:18216) (DC: dc1)
2016/06/23 07:52:32 [INFO] agent.rpc: Accepted client: 127.0.0.1:48706
2016/06/23 07:52:32 [ERR] serf: Keyring is empty (encryption not enabled)
2016/06/23 07:52:32 [DEBUG] serf: messageQueryResponseType: Node 216.dc1
2016/06/23 07:52:32 [ERR] serf: Keyring is empty (encryption not enabled)
2016/06/23 07:52:32 [DEBUG] serf: messageQueryResponseType: Node 216
2016/06/23 07:52:32 [INFO] agent: requesting shutdown
2016/06/23 07:52:32 [INFO] consul: shutting down server
2016/06/23 07:52:32 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:32 [INFO] raft: Node at 127.0.0.1:18216 [Candidate] entering Candidate state
2016/06/23 07:52:32 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:32 [DEBUG] memberlist: Potential blocking operation. Last command took 50.268872ms
2016/06/23 07:52:32 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:32 [INFO] agent: shutdown complete
--- PASS: TestRPCClientKeyOperation_encryptionDisabled (1.67s)
=== RUN   TestRPCLogStream
--- PASS: TestRPCLogStream (0.01s)
=== RUN   TestProviderService
--- PASS: TestProviderService (0.00s)
=== RUN   TestProviderConfig
--- PASS: TestProviderConfig (0.00s)
=== RUN   TestSCADAListener
--- PASS: TestSCADAListener (0.00s)
=== RUN   TestSCADAAddr
--- PASS: TestSCADAAddr (0.00s)
=== RUN   TestSessionCreate
2016/06/23 07:52:33 [INFO] raft: Node at 127.0.0.1:18217 [Follower] entering Follower state
2016/06/23 07:52:33 [INFO] serf: EventMemberJoin: Node 217 127.0.0.1
2016/06/23 07:52:33 [INFO] consul: adding LAN server Node 217 (Addr: 127.0.0.1:18217) (DC: dc1)
2016/06/23 07:52:33 [INFO] serf: EventMemberJoin: Node 217.dc1 127.0.0.1
2016/06/23 07:52:33 [INFO] consul: adding WAN server Node 217.dc1 (Addr: 127.0.0.1:18217) (DC: dc1)
2016/06/23 07:52:33 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:33 [INFO] raft: Node at 127.0.0.1:18217 [Candidate] entering Candidate state
2016/06/23 07:52:34 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:34 [DEBUG] raft: Vote granted from 127.0.0.1:18217. Tally: 1
2016/06/23 07:52:34 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:34 [INFO] raft: Node at 127.0.0.1:18217 [Leader] entering Leader state
2016/06/23 07:52:34 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:34 [INFO] consul: New leader elected: Node 217
2016/06/23 07:52:34 [ERR] scada-client: failed to handshake: invalid token
2016/06/23 07:52:34 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:34 [DEBUG] raft: Node 127.0.0.1:18217 updated peer set (2): [127.0.0.1:18217]
2016/06/23 07:52:34 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:35 [INFO] consul: member 'Node 217' joined, marking health alive
2016/06/23 07:52:36 [INFO] agent: requesting shutdown
2016/06/23 07:52:36 [INFO] consul: shutting down server
2016/06/23 07:52:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:36 [INFO] agent: shutdown complete
2016/06/23 07:52:36 [DEBUG] http: Shutting down http server (127.0.0.1:19017)
--- PASS: TestSessionCreate (3.63s)
=== RUN   TestSessionCreateDelete
2016/06/23 07:52:37 [INFO] raft: Node at 127.0.0.1:18218 [Follower] entering Follower state
2016/06/23 07:52:37 [INFO] serf: EventMemberJoin: Node 218 127.0.0.1
2016/06/23 07:52:37 [INFO] consul: adding LAN server Node 218 (Addr: 127.0.0.1:18218) (DC: dc1)
2016/06/23 07:52:37 [INFO] serf: EventMemberJoin: Node 218.dc1 127.0.0.1
2016/06/23 07:52:37 [INFO] consul: adding WAN server Node 218.dc1 (Addr: 127.0.0.1:18218) (DC: dc1)
2016/06/23 07:52:37 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:37 [INFO] raft: Node at 127.0.0.1:18218 [Candidate] entering Candidate state
2016/06/23 07:52:37 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:37 [DEBUG] raft: Vote granted from 127.0.0.1:18218. Tally: 1
2016/06/23 07:52:37 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:37 [INFO] raft: Node at 127.0.0.1:18218 [Leader] entering Leader state
2016/06/23 07:52:37 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:37 [INFO] consul: New leader elected: Node 218
2016/06/23 07:52:38 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:38 [DEBUG] raft: Node 127.0.0.1:18218 updated peer set (2): [127.0.0.1:18218]
2016/06/23 07:52:38 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:38 [INFO] consul: member 'Node 218' joined, marking health alive
2016/06/23 07:52:39 [INFO] agent: requesting shutdown
2016/06/23 07:52:39 [INFO] consul: shutting down server
2016/06/23 07:52:39 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:39 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:39 [INFO] agent: shutdown complete
2016/06/23 07:52:39 [DEBUG] http: Shutting down http server (127.0.0.1:19018)
--- PASS: TestSessionCreateDelete (3.50s)
=== RUN   TestFixupLockDelay
--- PASS: TestFixupLockDelay (0.00s)
=== RUN   TestSessionDestroy
2016/06/23 07:52:40 [INFO] raft: Node at 127.0.0.1:18219 [Follower] entering Follower state
2016/06/23 07:52:40 [INFO] serf: EventMemberJoin: Node 219 127.0.0.1
2016/06/23 07:52:40 [INFO] consul: adding LAN server Node 219 (Addr: 127.0.0.1:18219) (DC: dc1)
2016/06/23 07:52:40 [INFO] serf: EventMemberJoin: Node 219.dc1 127.0.0.1
2016/06/23 07:52:40 [INFO] consul: adding WAN server Node 219.dc1 (Addr: 127.0.0.1:18219) (DC: dc1)
2016/06/23 07:52:40 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:40 [INFO] raft: Node at 127.0.0.1:18219 [Candidate] entering Candidate state
2016/06/23 07:52:40 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:40 [DEBUG] raft: Vote granted from 127.0.0.1:18219. Tally: 1
2016/06/23 07:52:40 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:40 [INFO] raft: Node at 127.0.0.1:18219 [Leader] entering Leader state
2016/06/23 07:52:40 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:40 [INFO] consul: New leader elected: Node 219
2016/06/23 07:52:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:41 [DEBUG] raft: Node 127.0.0.1:18219 updated peer set (2): [127.0.0.1:18219]
2016/06/23 07:52:41 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:41 [INFO] consul: member 'Node 219' joined, marking health alive
2016/06/23 07:52:42 [INFO] agent: requesting shutdown
2016/06/23 07:52:42 [INFO] consul: shutting down server
2016/06/23 07:52:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:42 [INFO] agent: shutdown complete
2016/06/23 07:52:42 [DEBUG] http: Shutting down http server (127.0.0.1:19019)
--- PASS: TestSessionDestroy (2.31s)
=== RUN   TestSessionTTL
2016/06/23 07:52:42 [INFO] raft: Node at 127.0.0.1:18220 [Follower] entering Follower state
2016/06/23 07:52:42 [INFO] serf: EventMemberJoin: Node 220 127.0.0.1
2016/06/23 07:52:42 [INFO] consul: adding LAN server Node 220 (Addr: 127.0.0.1:18220) (DC: dc1)
2016/06/23 07:52:42 [INFO] serf: EventMemberJoin: Node 220.dc1 127.0.0.1
2016/06/23 07:52:42 [INFO] consul: adding WAN server Node 220.dc1 (Addr: 127.0.0.1:18220) (DC: dc1)
2016/06/23 07:52:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:42 [INFO] raft: Node at 127.0.0.1:18220 [Candidate] entering Candidate state
2016/06/23 07:52:43 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:43 [DEBUG] raft: Vote granted from 127.0.0.1:18220. Tally: 1
2016/06/23 07:52:43 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:43 [INFO] raft: Node at 127.0.0.1:18220 [Leader] entering Leader state
2016/06/23 07:52:43 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:43 [INFO] consul: New leader elected: Node 220
2016/06/23 07:52:43 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:43 [DEBUG] raft: Node 127.0.0.1:18220 updated peer set (2): [127.0.0.1:18220]
2016/06/23 07:52:43 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:44 [INFO] consul: member 'Node 220' joined, marking health alive
2016/06/23 07:52:55 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18501
2016/06/23 07:52:55 [DEBUG] memberlist: TCP connection from=127.0.0.1:38984
2016/06/23 07:53:00 [DEBUG] memberlist: Potential blocking operation. Last command took 71.594542ms
2016/06/23 07:53:03 [DEBUG] memberlist: Potential blocking operation. Last command took 54.833692ms
2016/06/23 07:53:04 [DEBUG] consul.state: Session 5ce8c266-0be6-c641-380a-9127781d1061 TTL expired
2016/06/23 07:53:05 [DEBUG] memberlist: Potential blocking operation. Last command took 62.34259ms
2016/06/23 07:53:08 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18500
2016/06/23 07:53:08 [DEBUG] memberlist: TCP connection from=127.0.0.1:51354
2016/06/23 07:53:10 [DEBUG] memberlist: Potential blocking operation. Last command took 77.109379ms
2016/06/23 07:53:12 [DEBUG] memberlist: Potential blocking operation. Last command took 36.628464ms
2016/06/23 07:53:14 [INFO] agent: requesting shutdown
2016/06/23 07:53:14 [INFO] consul: shutting down server
2016/06/23 07:53:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:15 [INFO] agent: shutdown complete
2016/06/23 07:53:15 [DEBUG] http: Shutting down http server (127.0.0.1:19020)
--- PASS: TestSessionTTL (32.81s)
=== RUN   TestSessionTTLConfig
2016/06/23 07:53:15 [DEBUG] memberlist: Potential blocking operation. Last command took 32.64734ms
2016/06/23 07:53:15 [INFO] raft: Node at 127.0.0.1:18221 [Follower] entering Follower state
2016/06/23 07:53:15 [INFO] serf: EventMemberJoin: Node 221 127.0.0.1
2016/06/23 07:53:15 [INFO] consul: adding LAN server Node 221 (Addr: 127.0.0.1:18221) (DC: dc1)
2016/06/23 07:53:15 [INFO] serf: EventMemberJoin: Node 221.dc1 127.0.0.1
2016/06/23 07:53:15 [INFO] consul: adding WAN server Node 221.dc1 (Addr: 127.0.0.1:18221) (DC: dc1)
2016/06/23 07:53:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:15 [INFO] raft: Node at 127.0.0.1:18221 [Candidate] entering Candidate state
2016/06/23 07:53:16 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:16 [DEBUG] raft: Vote granted from 127.0.0.1:18221. Tally: 1
2016/06/23 07:53:16 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:16 [INFO] raft: Node at 127.0.0.1:18221 [Leader] entering Leader state
2016/06/23 07:53:16 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:16 [INFO] consul: New leader elected: Node 221
2016/06/23 07:53:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:16 [DEBUG] raft: Node 127.0.0.1:18221 updated peer set (2): [127.0.0.1:18221]
2016/06/23 07:53:16 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:16 [INFO] consul: member 'Node 221' joined, marking health alive
2016/06/23 07:53:19 [DEBUG] consul.state: Session 9c7f2627-16b8-91de-5a15-44e29a164491 TTL expired
2016/06/23 07:53:20 [INFO] agent: requesting shutdown
2016/06/23 07:53:20 [INFO] consul: shutting down server
2016/06/23 07:53:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:21 [INFO] agent: shutdown complete
2016/06/23 07:53:21 [DEBUG] http: Shutting down http server (127.0.0.1:19021)
--- FAIL: TestSessionTTLConfig (6.04s)
	session_endpoint_test.go:265: session '9c7f2627-16b8-91de-5a15-44e29a164491' should have been destroyed
=== RUN   TestSessionTTLRenew
2016/06/23 07:53:21 [INFO] raft: Node at 127.0.0.1:18222 [Follower] entering Follower state
2016/06/23 07:53:21 [INFO] serf: EventMemberJoin: Node 222 127.0.0.1
2016/06/23 07:53:21 [INFO] consul: adding LAN server Node 222 (Addr: 127.0.0.1:18222) (DC: dc1)
2016/06/23 07:53:21 [INFO] serf: EventMemberJoin: Node 222.dc1 127.0.0.1
2016/06/23 07:53:21 [INFO] consul: adding WAN server Node 222.dc1 (Addr: 127.0.0.1:18222) (DC: dc1)
2016/06/23 07:53:21 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:21 [INFO] raft: Node at 127.0.0.1:18222 [Candidate] entering Candidate state
2016/06/23 07:53:22 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:22 [DEBUG] raft: Vote granted from 127.0.0.1:18222. Tally: 1
2016/06/23 07:53:22 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:22 [INFO] raft: Node at 127.0.0.1:18222 [Leader] entering Leader state
2016/06/23 07:53:22 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:22 [INFO] consul: New leader elected: Node 222
2016/06/23 07:53:22 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:22 [DEBUG] raft: Node 127.0.0.1:18222 updated peer set (2): [127.0.0.1:18222]
2016/06/23 07:53:22 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:23 [INFO] consul: member 'Node 222' joined, marking health alive
2016/06/23 07:53:29 [DEBUG] memberlist: Potential blocking operation. Last command took 70.578511ms
2016/06/23 07:53:34 [DEBUG] memberlist: Potential blocking operation. Last command took 45.975752ms
2016/06/23 07:53:37 [DEBUG] memberlist: Potential blocking operation. Last command took 11.697361ms
2016/06/23 07:53:41 [DEBUG] memberlist: Potential blocking operation. Last command took 62.919941ms
2016/06/23 07:53:53 [DEBUG] consul.state: Session de4c372a-6a40-b8c7-7e2c-c69c3396e60a TTL expired
2016/06/23 07:53:55 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18501
2016/06/23 07:53:55 [DEBUG] memberlist: TCP connection from=127.0.0.1:39064
2016/06/23 07:54:06 [DEBUG] memberlist: Potential blocking operation. Last command took 23.920738ms
2016/06/23 07:54:08 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:18500
2016/06/23 07:54:08 [DEBUG] memberlist: TCP connection from=127.0.0.1:51430
2016/06/23 07:54:13 [INFO] agent: requesting shutdown
2016/06/23 07:54:13 [INFO] consul: shutting down server
2016/06/23 07:54:13 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:13 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:13 [INFO] agent: shutdown complete
2016/06/23 07:54:13 [DEBUG] http: Shutting down http server (127.0.0.1:19022)
--- PASS: TestSessionTTLRenew (52.69s)
=== RUN   TestSessionGet
2016/06/23 07:54:14 [INFO] raft: Node at 127.0.0.1:18223 [Follower] entering Follower state
2016/06/23 07:54:14 [INFO] serf: EventMemberJoin: Node 223 127.0.0.1
2016/06/23 07:54:14 [INFO] consul: adding LAN server Node 223 (Addr: 127.0.0.1:18223) (DC: dc1)
2016/06/23 07:54:14 [INFO] serf: EventMemberJoin: Node 223.dc1 127.0.0.1
2016/06/23 07:54:14 [INFO] consul: adding WAN server Node 223.dc1 (Addr: 127.0.0.1:18223) (DC: dc1)
2016/06/23 07:54:14 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:54:14 [INFO] raft: Node at 127.0.0.1:18223 [Candidate] entering Candidate state
2016/06/23 07:54:14 [DEBUG] raft: Votes needed: 1
2016/06/23 07:54:14 [DEBUG] raft: Vote granted from 127.0.0.1:18223. Tally: 1
2016/06/23 07:54:14 [INFO] raft: Election won. Tally: 1
2016/06/23 07:54:14 [INFO] raft: Node at 127.0.0.1:18223 [Leader] entering Leader state
2016/06/23 07:54:14 [INFO] consul: cluster leadership acquired
2016/06/23 07:54:14 [INFO] consul: New leader elected: Node 223
2016/06/23 07:54:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:54:14 [DEBUG] raft: Node 127.0.0.1:18223 updated peer set (2): [127.0.0.1:18223]
2016/06/23 07:54:14 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:54:15 [INFO] consul: member 'Node 223' joined, marking health alive
2016/06/23 07:54:15 [INFO] agent: requesting shutdown
2016/06/23 07:54:15 [INFO] consul: shutting down server
2016/06/23 07:54:15 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:15 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:15 [INFO] agent: shutdown complete
2016/06/23 07:54:15 [DEBUG] http: Shutting down http server (127.0.0.1:19023)
2016/06/23 07:54:16 [INFO] raft: Node at 127.0.0.1:18224 [Follower] entering Follower state
2016/06/23 07:54:16 [INFO] serf: EventMemberJoin: Node 224 127.0.0.1
2016/06/23 07:54:16 [INFO] consul: adding LAN server Node 224 (Addr: 127.0.0.1:18224) (DC: dc1)
2016/06/23 07:54:16 [INFO] serf: EventMemberJoin: Node 224.dc1 127.0.0.1
2016/06/23 07:54:16 [INFO] consul: adding WAN server Node 224.dc1 (Addr: 127.0.0.1:18224) (DC: dc1)
2016/06/23 07:54:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:54:16 [INFO] raft: Node at 127.0.0.1:18224 [Candidate] entering Candidate state
2016/06/23 07:54:16 [DEBUG] raft: Votes needed: 1
2016/06/23 07:54:16 [DEBUG] raft: Vote granted from 127.0.0.1:18224. Tally: 1
2016/06/23 07:54:16 [INFO] raft: Election won. Tally: 1
2016/06/23 07:54:16 [INFO] raft: Node at 127.0.0.1:18224 [Leader] entering Leader state
2016/06/23 07:54:16 [INFO] consul: cluster leadership acquired
2016/06/23 07:54:16 [INFO] consul: New leader elected: Node 224
2016/06/23 07:54:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:54:17 [DEBUG] raft: Node 127.0.0.1:18224 updated peer set (2): [127.0.0.1:18224]
2016/06/23 07:54:17 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:54:17 [INFO] consul: member 'Node 224' joined, marking health alive
2016/06/23 07:54:17 [INFO] agent: requesting shutdown
2016/06/23 07:54:17 [INFO] consul: shutting down server
2016/06/23 07:54:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:17 [INFO] agent: shutdown complete
2016/06/23 07:54:17 [DEBUG] http: Shutting down http server (127.0.0.1:19024)
--- PASS: TestSessionGet (3.99s)
=== RUN   TestSessionList
SIGQUIT: quit
PC=0x73888 m=0

goroutine 0 [idle]:
runtime.futex(0xc8a89c, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1f918, 0x0, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/runtime/sys_linux_arm.s:246 +0x1c
runtime.futexsleep(0xc8a89c, 0x0, 0xffffffff, 0xffffffff)
	/usr/lib/go-1.6/src/runtime/os1_linux.go:40 +0x68
runtime.notesleep(0xc8a89c)
	/usr/lib/go-1.6/src/runtime/lock_futex.go:145 +0xa4
runtime.stopm()
	/usr/lib/go-1.6/src/runtime/proc.go:1538 +0x100
runtime.findrunnable(0x10e1c000, 0x0)
	/usr/lib/go-1.6/src/runtime/proc.go:1976 +0x7c8
runtime.schedule()
	/usr/lib/go-1.6/src/runtime/proc.go:2075 +0x26c
runtime.park_m(0x10f65040)
	/usr/lib/go-1.6/src/runtime/proc.go:2140 +0x16c
runtime.mcall(0xc8a300)
	/usr/lib/go-1.6/src/runtime/asm_arm.s:183 +0x5c

goroutine 1 [chan receive]:
testing.RunTests(0x9f5fd4, 0xc87e68, 0x11f, 0x11f, 0x735500)
	/usr/lib/go-1.6/src/testing/testing.go:583 +0x62c
testing.(*M).Run(0x10e33f7c, 0x10edf035)
	/usr/lib/go-1.6/src/testing/testing.go:515 +0x8c
main.main()
	github.com/hashicorp/consul/command/agent/_test/_testmain.go:626 +0x118

goroutine 17 [syscall, 9 minutes, locked to thread]:
runtime.goexit()
	/usr/lib/go-1.6/src/runtime/asm_arm.s:990 +0x4

goroutine 6 [syscall, 9 minutes]:
os/signal.signal_recv(0x172)
	/usr/lib/go-1.6/src/runtime/sigqueue.go:116 +0x190
os/signal.loop()
	/usr/lib/go-1.6/src/os/signal/signal_unix.go:22 +0x14
created by os/signal.init.1
	/usr/lib/go-1.6/src/os/signal/signal_unix.go:28 +0x30

goroutine 4441 [select]:
github.com/hashicorp/consul/command/agent.(*Agent).sendCoordinate(0x10efcc30)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:612 +0x600
created by github.com/hashicorp/consul/command/agent.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:222 +0xe80

goroutine 4124 [IO wait]:
net.runtime_pollWait(0xb6476d48, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111c2038, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111c2038, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x111c2000, 0x12716000, 0xffff, 0xffff, 0x1116b920, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10fa4760, 0x12716000, 0xffff, 0xffff, 0x1116b920, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10fa4760, 0x12716000, 0xffff, 0xffff, 0x0, 0x12716000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x10f51e60, 0x10fa4760, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x10f51ed8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10fa4778, 0x10fa4760, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x10f51e60, 0x10fa4760, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x10f51e60, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x10f51e60, 0x1116fa70, 0x1123f480)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4312 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x111c4000)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 3724 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10e68ea0, 0x5f5e100, 0x0, 0x111ab640, 0x111ab5c0, 0x10e27178)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 4371 [select]:
github.com/hashicorp/raft.(*Raft).runFSM(0x1162a480)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:509 +0xd5c
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runFSM)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:253 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x1162a480, 0x11233720)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 4719 [IO wait]:
net.runtime_pollWait(0xb54312c0, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10eaeb38, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10eaeb38, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x10eaeb00, 0x12706000, 0xffff, 0xffff, 0x10f32420, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x11232658, 0x12706000, 0xffff, 0xffff, 0x10f32420, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x11232658, 0x12706000, 0xffff, 0xffff, 0x0, 0x12706000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x116be5a0, 0x11232658, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x116be618, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x112326c0, 0x11232658, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x116be5a0, 0x11232658, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x116be5a0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x116be5a0, 0x111b3740, 0x114bfec0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4430 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x10f82e10, 0x110b30c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:142 +0x1b0
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 6329 [select]:
github.com/hashicorp/consul/consul.(*Coordinate).batchUpdate(0x1126a160)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:41 +0x1cc
created by github.com/hashicorp/consul/consul.NewCoordinate
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:33 +0xc4

goroutine 4425 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x11028400)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 3730 [select, 7 minutes]:
github.com/hashicorp/consul/consul.(*Server).lanEventHandler(0x10e6b5e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:37 +0x47c
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:261 +0xd28

goroutine 3721 [select, 7 minutes]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x10e68ea0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:141 +0xd34

goroutine 4393 [select, 7 minutes]:
github.com/hashicorp/consul/consul.(*Server).monitorLeadership(0x10e6bdc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:33 +0x1a0
created by github.com/hashicorp/consul/consul.(*Server).setupRaft
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:426 +0x8a4

goroutine 4427 [IO wait]:
net.runtime_pollWait(0xb5431a40, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110b2df8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110b2df8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x110b2dc0, 0x11bfa000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb6472030, 0x10e0a07c)
	/usr/lib/go-1.6/src/net/fd_unix.go:277 +0x20c
net.(*UDPConn).ReadFromUDP(0x10e27cd0, 0x11bfa000, 0x10000, 0x10000, 0x736698, 0x10000, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x10e27cd0, 0x11bfa000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x10f82e10)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:140 +0xd18

goroutine 4381 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x1156d4d0, 0x5f5e100, 0x0, 0x10f73000, 0x10f72f80, 0x11233988)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 6359 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x1164fb80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 4440 [select, 7 minutes]:
github.com/hashicorp/consul/command/agent.(*Agent).handleEvents(0x10efcc30)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/user_event.go:111 +0x2a0
created by github.com/hashicorp/consul/command/agent.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:218 +0xe50

goroutine 3715 [select, 1 minutes]:
github.com/hashicorp/raft.(*Raft).runSnapshots(0x10f38360)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:1706 +0x380
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runSnapshots)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:254 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x10f38360, 0x10e26e08)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 4429 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10f82e10, 0x5f5e100, 0x0, 0x110b3100, 0x110b30c0, 0x10e27ea8)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 4423 [select, 7 minutes]:
github.com/hashicorp/consul/consul.(*Server).lanEventHandler(0x10e6bdc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:37 +0x47c
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:261 +0xd28

goroutine 4379 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x1156d4d0, 0x5f5e100, 0x0, 0x10f72fc0, 0x10f72f80, 0x11233978)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 4032 [IO wait]:
net.runtime_pollWait(0xb6477090, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111b0838, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111b0838, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x111b0800, 0x11798000, 0xffff, 0xffff, 0x124ea300, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10e27878, 0x11798000, 0xffff, 0xffff, 0x124ea300, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10e27878, 0x11798000, 0xffff, 0xffff, 0x0, 0x11798000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x10f51170, 0x10e27878, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x10f511e8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10e27890, 0x10e27878, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x10f51170, 0x10e27878, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x10f51170, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x10f51170, 0x110d83f0, 0x111ab440)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 3699 [sleep]:
time.Sleep(0x3b9aca00, 0x0)
	/usr/lib/go-1.6/src/runtime/time.go:59 +0x104
github.com/armon/go-metrics.(*Metrics).collectStats(0x11269c50)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/armon/go-metrics/metrics.go:67 +0x28
created by github.com/armon/go-metrics.New
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/armon/go-metrics/start.go:61 +0xbc

goroutine 4415 [select]:
github.com/hashicorp/consul/consul.(*Server).leaderLoop(0x10e6bdc0, 0x10f71840)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:105 +0x47c
created by github.com/hashicorp/consul/consul.(*Server).monitorLeadership
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:37 +0xe4

goroutine 6337 [IO wait, 4 minutes]:
net.runtime_pollWait(0xb54311d0, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x120c8e78, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x120c8e78, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x120c8e40, 0x0, 0xb6477a30, 0x10f4bf30)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x111aed38, 0xb, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x11c48630)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:139 +0xcfc

goroutine 4577 [IO wait]:
net.runtime_pollWait(0xb5431590, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f736f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f736f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x10f736c0, 0x126e6000, 0xffff, 0xffff, 0x10f32270, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10fa46e0, 0x126e6000, 0xffff, 0xffff, 0x10f32270, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10fa46e0, 0x126e6000, 0xffff, 0xffff, 0x0, 0x126e6000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x111ef3b0, 0x10fa46e0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x111ef428, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10fa4710, 0x10fa46e0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x111ef3b0, 0x10fa46e0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x111ef3b0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x111ef3b0, 0x1116e690, 0x10f72a00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 3697 [select]:
github.com/hashicorp/consul/command/agent.(*Agent).sendCoordinate(0x10ef64b0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:612 +0x600
created by github.com/hashicorp/consul/command/agent.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:222 +0xe80

goroutine 3718 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x10e12a00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 4320 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x1114d680)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 3984 [IO wait]:
net.runtime_pollWait(0xb64771f8, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1102c678, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1102c678, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x1102c640, 0x11c0a000, 0xffff, 0xffff, 0x11203110, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10e26160, 0x11c0a000, 0xffff, 0xffff, 0x11203110, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10e26160, 0x11c0a000, 0xffff, 0xffff, 0x0, 0x11c0a000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x10f3f050, 0x10e26160, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x10f3f0c8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10e26188, 0x10e26160, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x10f3f050, 0x10e26160, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x10f3f050, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x10f3f050, 0x112005a0, 0x1104c800)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4406 [select, 7 minutes]:
github.com/hashicorp/consul/command/agent.(*Agent).handleEvents(0x10efca50)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/user_event.go:111 +0x2a0
created by github.com/hashicorp/consul/command/agent.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:218 +0xe50

goroutine 4578 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb5431518, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f73738, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f73738, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10f73700, 0x0, 0xb6477a30, 0x111b4060)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10fa4748, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10fa4748, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x111ef440, 0xb64778a8, 0x10fa4748, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x111ef440, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x111ef440, 0x1116e690, 0x10f72a40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 3689 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x1114c640)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 4175 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6476c58, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1104c7b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1104c7b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x1104c780, 0x0, 0xb6477a30, 0x1129e1c0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10f7a0f0, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10f7a0f0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x10e69cb0, 0xb64778a8, 0x10f7a0f0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x10e69cb0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x10e69cb0, 0x1118a600, 0x110b2980)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 3693 [select, 7 minutes]:
github.com/hashicorp/consul/consul.(*Server).wanEventHandler(0x10e6b5e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:67 +0x2c8
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:270 +0xe8c

goroutine 3630 [sleep]:
time.Sleep(0x3b9aca00, 0x0)
	/usr/lib/go-1.6/src/runtime/time.go:59 +0x104
github.com/armon/go-metrics.(*Metrics).collectStats(0x111d8600)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/armon/go-metrics/metrics.go:67 +0x28
created by github.com/armon/go-metrics.New
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/armon/go-metrics/start.go:61 +0xbc

goroutine 4435 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114d900, 0x8856d0, 0x5, 0x10f4f660)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 3690 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114c640, 0x885e40, 0x6, 0x112914a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 4318 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10f505a0, 0x5f5e100, 0x0, 0x111faa40, 0x111fa9c0, 0x10e27958)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 3720 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb64775b8, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111ab078, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111ab078, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x111ab040, 0x11438000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb6472030, 0x10e0a07c)
	/usr/lib/go-1.6/src/net/fd_unix.go:277 +0x20c
net.(*UDPConn).ReadFromUDP(0x10e26f10, 0x11438000, 0x10000, 0x10000, 0x736698, 0x10000, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x10e26f10, 0x11438000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x10e68ea0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:140 +0xd18

goroutine 4375 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x10e12b80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 3939 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6477450, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1123fd78, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1123fd78, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x1123fd40, 0x0, 0xb6477a30, 0x1129b1c0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x11232788, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x11232788, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x10e69290, 0xb64778a8, 0x11232788, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x10e69290, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x10e69290, 0x1122d560, 0x111fa500)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 3686 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x10e695f0, 0x111c2340)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:142 +0x1b0
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 3696 [select, 7 minutes]:
github.com/hashicorp/consul/command/agent.(*Agent).handleEvents(0x10ef64b0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/user_event.go:111 +0x2a0
created by github.com/hashicorp/consul/command/agent.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:218 +0xe50

goroutine 3748 [IO wait]:
net.runtime_pollWait(0xb6476f28, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111abcf8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111abcf8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x111abcc0, 0x12600000, 0xffff, 0xffff, 0x1116abd0, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10e272c0, 0x12600000, 0xffff, 0xffff, 0x1116abd0, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10e272c0, 0x12600000, 0xffff, 0xffff, 0x0, 0x12600000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x10f3e6c0, 0x10e272c0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x10f3e738, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10e272d8, 0x10e272c0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x10f3e6c0, 0x10e272c0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x10f3e6c0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x10f3e6c0, 0x111d9e60, 0x111c2880)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 3569 [runnable]:
github.com/hashicorp/raft.(*Raft).leaderLoop(0x10f38360)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:874 +0xce8
github.com/hashicorp/raft.(*Raft).runLeader(0x10f38360)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:838 +0x8a0
github.com/hashicorp/raft.(*Raft).run(0x10f38360)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:602 +0xb8
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.run)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:252 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x10f38360, 0x10e26df0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 4421 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114d7c0, 0x8856d0, 0x5, 0x10f4f000)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 3717 [select, 7 minutes]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x11290b20)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

goroutine 4431 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10f82e10, 0x5f5e100, 0x0, 0x110b3140, 0x110b30c0, 0x10e27ec0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 6335 [select, 4 minutes]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x11f86180)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

goroutine 3694 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6477630, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111b1b38, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111b1b38, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x111b1b00, 0x0, 0xb6477a30, 0x1125b9b0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10f7a800, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10f7a800, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/hashicorp/consul/consul.(*Server).listen(0x10e6b5e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:60 +0x48
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:273 +0xea8

goroutine 4407 [select]:
github.com/hashicorp/consul/command/agent.(*Agent).sendCoordinate(0x10efca50)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:612 +0x600
created by github.com/hashicorp/consul/command/agent.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:222 +0xe80

goroutine 4382 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x10e828c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 4374 [select, 7 minutes]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x10fb54c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

goroutine 3691 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114c640, 0x8856d0, 0x5, 0x112914c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 4305 [IO wait]:
net.runtime_pollWait(0xb6476a00, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1114a0b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1114a0b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x1114a080, 0x11944000, 0xffff, 0xffff, 0x118b2810, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10e27770, 0x11944000, 0xffff, 0xffff, 0x118b2810, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10e27770, 0x11944000, 0xffff, 0xffff, 0x0, 0x11944000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x10f3f950, 0x10e27770, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x10f3f9c8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10e27798, 0x10e27770, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x10f3f950, 0x10e27770, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x10f3f950, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x10f3f950, 0x1122db00, 0x111abf40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 3719 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6477810, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111ab038, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111ab038, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x111ab000, 0x0, 0xb6477a30, 0x1125b870)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10e26f08, 0x2d8de0cc, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x10e68ea0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:139 +0xcfc

goroutine 3688 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x1114c640)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 4380 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x1156d4d0, 0x10f72f80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:142 +0x1b0
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 3698 [select, 7 minutes]:
github.com/armon/go-metrics.(*InmemSignal).run(0x11269b00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/armon/go-metrics/inmem_signal.go:63 +0xc8
created by github.com/armon/go-metrics.NewInmemSignal
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/armon/go-metrics/inmem_signal.go:37 +0x1e4

goroutine 4125 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6476cd0, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1124f3b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1124f3b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x1124f380, 0x0, 0xb6477a30, 0x11220a60)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x111aeba8, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x111aeba8, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x111ee090, 0xb64778a8, 0x111aeba8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x111ee090, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x111ee090, 0x1116fa70, 0x1123f4c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 3735 [select, 7 minutes]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x10e695f0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:141 +0xd34

goroutine 3731 [select, 7 minutes]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x11291400)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

goroutine 5220 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb3013d90, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110fb138, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110fb138, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x110fb100, 0x0, 0xb6477a30, 0x1122ace0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x112cacc0, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x112cacc0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x11e945a0, 0xb64778a8, 0x112cacc0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x11e945a0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x11e945a0, 0x112bf3e0, 0x111fba00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 3629 [select, 7 minutes]:
github.com/armon/go-metrics.(*InmemSignal).run(0x111d85a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/armon/go-metrics/inmem_signal.go:63 +0xc8
created by github.com/armon/go-metrics.NewInmemSignal
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/armon/go-metrics/inmem_signal.go:37 +0x1e4

goroutine 3733 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb64776a8, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111ab9b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111ab9b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x111ab980, 0x0, 0xb6477a30, 0x111df190)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10e27450, 0x1000, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x10e695f0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:139 +0xcfc

goroutine 4807 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb5431068, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10eacbf8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10eacbf8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10eacbc0, 0x0, 0xb6477a30, 0x1126b9c0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10e27288, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10e27288, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x1156ce10, 0xb64778a8, 0x10e27288, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x1156ce10, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x1156ce10, 0x1116a8a0, 0x114bf280)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 3714 [select]:
github.com/hashicorp/raft.(*Raft).runFSM(0x10f38360)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:509 +0xd5c
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runFSM)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:253 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x10f38360, 0x10e26e00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 4377 [IO wait]:
net.runtime_pollWait(0xb6476dc0, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f72cb8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f72cb8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x10f72c80, 0x11bea000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb6472030, 0x10e0a07c)
	/usr/lib/go-1.6/src/net/fd_unix.go:277 +0x20c
net.(*UDPConn).ReadFromUDP(0x112337a0, 0x11bea000, 0x10000, 0x10000, 0x736698, 0x10000, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x112337a0, 0x11bea000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x1156d4d0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:140 +0xd18

goroutine 4390 [select]:
github.com/hashicorp/raft.(*Raft).leaderLoop(0x1162a6c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:874 +0xce8
github.com/hashicorp/raft.(*Raft).runLeader(0x1162a6c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:838 +0x8a0
github.com/hashicorp/raft.(*Raft).run(0x1162a6c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:602 +0xb8
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.run)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:252 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x1162a6c0, 0x10e279a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 5346 [IO wait, 5 minutes]:
net.runtime_pollWait(0xb3013b38, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1124fc38, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1124fc38, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x1124fc00, 0x0, 0xb6477a30, 0x110f9b00)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x1112c0c8, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x1112c0c8, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x11745d40, 0xb64778a8, 0x1112c0c8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x11745d40, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x11745d40, 0x11087d10, 0x110f6580)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 4398 [select, 7 minutes]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x10f826c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:141 +0xd34

goroutine 3726 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x1114c500)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 4394 [select, 7 minutes]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x10f4ef40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

goroutine 4386 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114d680, 0x8856d0, 0x5, 0x1103c2c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 4422 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114d7c0, 0x886c40, 0x5, 0x10f4f020)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 4376 [IO wait]:
net.runtime_pollWait(0xb6477360, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f72c78, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f72c78, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10f72c40, 0x0, 0xb6477a30, 0x10f75da0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x11233798, 0x9f5c38, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x1156d4d0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:139 +0xcfc

goroutine 3727 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114c500, 0x885e40, 0x6, 0x11290c20)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 4428 [select, 7 minutes]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x10f82e10)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:141 +0xd34

goroutine 3628 [select, 7 minutes, locked to thread]:
runtime.gopark(0x9f63e0, 0x1108a78c, 0x88c588, 0x6, 0x18, 0x2)
	/usr/lib/go-1.6/src/runtime/proc.go:262 +0x148
runtime.selectgoImpl(0x1108a78c, 0x0, 0xc)
	/usr/lib/go-1.6/src/runtime/select.go:392 +0x1204
runtime.selectgo(0x1108a78c)
	/usr/lib/go-1.6/src/runtime/select.go:215 +0x10
runtime.ensureSigM.func1()
	/usr/lib/go-1.6/src/runtime/signal1_unix.go:279 +0x428
runtime.goexit()
	/usr/lib/go-1.6/src/runtime/asm_arm.s:990 +0x4

goroutine 4887 [IO wait]:
net.runtime_pollWait(0xb5430f00, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f710b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f710b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x10f71080, 0x12310000, 0xffff, 0xffff, 0x11301140, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x112ca770, 0x12310000, 0xffff, 0xffff, 0x11301140, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x112ca770, 0x12310000, 0xffff, 0xffff, 0x0, 0x12310000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x116bddd0, 0x112ca770, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x116bde48, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x112ca7c0, 0x112ca770, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x116bddd0, 0x112ca770, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x116bddd0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x116bddd0, 0x112bec00, 0x10f70f40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 3687 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10e695f0, 0x5f5e100, 0x0, 0x111c23c0, 0x111c2340, 0x10f24290)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 3692 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114c640, 0x886c40, 0x5, 0x112914e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 4439 [select]:
github.com/hashicorp/consul/consul.(*Server).sessionStats(0x10e6bdc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/session_ttl.go:152 +0x1c4
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:276 +0xec4

goroutine 4264 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6476a78, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110f78b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110f78b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x110f7880, 0x0, 0xb6477a30, 0x111ba060)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x11233660, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x11233660, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x1156c120, 0xb64778a8, 0x11233660, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x1156c120, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x1156c120, 0x112be180, 0x10ec8f00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 5439 [IO wait]:
net.runtime_pollWait(0xb30139d0, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x11b4b3f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x11b4b3f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x11b4b3c0, 0x12686000, 0xffff, 0xffff, 0x1164a930, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x112cb818, 0x12686000, 0xffff, 0xffff, 0x1164a930, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x112cb818, 0x12686000, 0xffff, 0xffff, 0x0, 0x12686000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x11775cb0, 0x112cb818, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x11775d28, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x112cb830, 0x112cb818, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x11775cb0, 0x112cb818, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x11775cb0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x11775cb0, 0x1116fda0, 0x11995e40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4372 [select, 1 minutes]:
github.com/hashicorp/raft.(*Raft).runSnapshots(0x1162a480)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:1706 +0x380
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runSnapshots)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:254 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x1162a480, 0x11233728)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 4033 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6477018, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1123f0f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1123f0f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x1123f0c0, 0x0, 0xb6477a30, 0x1125b940)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x11232ec8, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x11232ec8, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x10f51200, 0xb64778a8, 0x11232ec8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x10f51200, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x10f51200, 0x110d83f0, 0x111ab480)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 4437 [select, 7 minutes]:
github.com/hashicorp/consul/consul.(*Server).wanEventHandler(0x10e6bdc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:67 +0x2c8
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:270 +0xe8c

goroutine 4319 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x1114d680)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 4402 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10e828c0, 0x886c40, 0x5, 0x10fb57c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 4385 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10e828c0, 0x8856d0, 0x5, 0x10fb57a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 4391 [select]:
github.com/hashicorp/raft.(*Raft).runFSM(0x1162a6c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:509 +0xd5c
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runFSM)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:253 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x1162a6c0, 0x10e279a8)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 6330 [select, 4 minutes]:
github.com/hashicorp/consul/consul.(*RaftLayer).Accept(0x124f5280, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/raft_rpc.go:57 +0x138
github.com/hashicorp/raft.(*NetworkTransport).listen(0x11f9ff40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:362 +0x50
created by github.com/hashicorp/raft.NewNetworkTransportWithLogger
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:154 +0x270

goroutine 4418 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x1114d7c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 4205 [IO wait]:
net.runtime_pollWait(0xb6476910, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1102a238, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1102a238, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x1102a200, 0x12516000, 0xffff, 0xffff, 0x110d94d0, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10f7a078, 0x12516000, 0xffff, 0xffff, 0x110d94d0, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10f7a078, 0x12516000, 0xffff, 0xffff, 0x0, 0x12516000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x10f83cb0, 0x10f7a078, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x10f83d28, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10f7a100, 0x10f7a078, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x10f83cb0, 0x10f7a078, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x10f83cb0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x10f83cb0, 0x10f326f0, 0x10fe4140)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 3685 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10e695f0, 0x5f5e100, 0x0, 0x111c2380, 0x111c2340, 0x10f24280)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 3734 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6477798, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111ab9f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111ab9f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x111ab9c0, 0x11448000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb6472030, 0x10e0a07c)
	/usr/lib/go-1.6/src/net/fd_unix.go:277 +0x20c
net.(*UDPConn).ReadFromUDP(0x10e27458, 0x11448000, 0x10000, 0x10000, 0x736698, 0x10000, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x10e27458, 0x11448000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x10e695f0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:140 +0xd18

goroutine 3583 [select, 7 minutes]:
github.com/armon/go-metrics.(*InmemSignal).run(0x112699e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/armon/go-metrics/inmem_signal.go:63 +0xc8
created by github.com/armon/go-metrics.NewInmemSignal
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/armon/go-metrics/inmem_signal.go:37 +0x1e4

goroutine 3729 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114c500, 0x886c40, 0x5, 0x11290c60)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 3695 [select]:
github.com/hashicorp/consul/consul.(*Server).sessionStats(0x10e6b5e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/session_ttl.go:152 +0x1c4
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:276 +0xec4

goroutine 3747 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6477108, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111c2878, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111c2878, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x111c2840, 0x0, 0xb6477a30, 0x11288480)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10f242c8, 0x1, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/consul/command/agent.tcpKeepAliveListener.Accept(0x10f242c8, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/http.go:177 +0x3c
net/http.(*Server).Serve(0x10fde050, 0xb542e5b8, 0x10f242c8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/http/server.go:2117 +0xfc
net/http.Serve(0xb542e5b8, 0x10f242c8, 0xb542e5d8, 0x111ad7a0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/http/server.go:1976 +0x80
created by github.com/hashicorp/consul/command/agent.NewHTTPServers
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/http.go:141 +0x1364

goroutine 5491 [IO wait]:
net.runtime_pollWait(0xb3013868, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x11c89378, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x11c89378, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x11c89340, 0x126f6000, 0xffff, 0xffff, 0x10f32390, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10f242f0, 0x126f6000, 0xffff, 0xffff, 0x10f32390, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10f242f0, 0x126f6000, 0xffff, 0xffff, 0x0, 0x126f6000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x11e94c60, 0x10f242f0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x11e94cd8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10f244c0, 0x10f242f0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x11e94c60, 0x10f242f0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x11e94c60, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x11e94c60, 0x113cce10, 0x11c89040)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 3722 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10e68ea0, 0x5f5e100, 0x0, 0x111ab600, 0x111ab5c0, 0x10e27168)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 4306 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6476988, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1123ffb8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1123ffb8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x1123ff80, 0x0, 0xb6477a30, 0x112425b0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10f7a978, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10f7a978, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x10f3f9e0, 0xb64778a8, 0x10f7a978, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x10f3f9e0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x10f3f9e0, 0x1122db00, 0x111abf80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 3728 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114c500, 0x8856d0, 0x5, 0x11290c40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 4263 [IO wait]:
net.runtime_pollWait(0xb6476af0, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10ec9038, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10ec9038, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x10ec9000, 0x11f5a000, 0xffff, 0xffff, 0x11271620, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x111ae418, 0x11f5a000, 0xffff, 0xffff, 0x11271620, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x111ae418, 0x11f5a000, 0xffff, 0xffff, 0x0, 0x11f5a000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x1156c090, 0x111ae418, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x1156c108, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x111ae438, 0x111ae418, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x1156c090, 0x111ae418, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x1156c090, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x1156c090, 0x112be180, 0x10ec8ec0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 3723 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x10e68ea0, 0x111ab5c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:142 +0x1b0
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 4388 [select, 7 minutes]:
github.com/hashicorp/consul/consul.(*Server).lanEventHandler(0x10e6a9a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:37 +0x47c
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:261 +0xd28

goroutine 4086 [IO wait]:
net.runtime_pollWait(0xb6476eb0, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110fa038, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110fa038, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x110fa000, 0x1279c000, 0xffff, 0xffff, 0x1164ac90, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10e26168, 0x1279c000, 0xffff, 0xffff, 0x1164ac90, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10e26168, 0x1279c000, 0xffff, 0xffff, 0x0, 0x1279c000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x10f09170, 0x10e26168, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x10f091e8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10e261a8, 0x10e26168, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x10f09170, 0x10e26168, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x10f09170, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x10f09170, 0x1105d8f0, 0x10f70780)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4946 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb5430d98, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f2f438, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f2f438, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10f2f400, 0x0, 0xb6477a30, 0x112124b0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x112ca580, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x112ca580, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x11774e10, 0xb64778a8, 0x112ca580, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x11774e10, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x11774e10, 0x1118aba0, 0x10e5b480)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 4206 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6476898, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x11012038, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x11012038, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x11012000, 0x0, 0xb6477a30, 0x110f8040)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10e26870, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10e26870, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x10f83e60, 0xb64778a8, 0x10e26870, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x10f83e60, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x10f83e60, 0x10f326f0, 0x10fe4180)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 4438 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb5431c20, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f73578, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f73578, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10f73540, 0x0, 0xb6477a30, 0x11243ae0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x11233af8, 0x3799a4, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x11233af8, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/hashicorp/consul/consul.(*Server).listen(0x10e6bdc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:60 +0x48
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:273 +0xea8

goroutine 4403 [select, 7 minutes]:
github.com/hashicorp/consul/consul.(*Server).wanEventHandler(0x10e6a9a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:67 +0x2c8
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:270 +0xe8c

goroutine 3732 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x10e12a80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 6331 [runnable]:
github.com/hashicorp/raft.(*Raft).leaderLoop(0x11fa0fc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:874 +0xce8
github.com/hashicorp/raft.(*Raft).runLeader(0x11fa0fc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:838 +0x8a0
github.com/hashicorp/raft.(*Raft).run(0x11fa0fc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:602 +0xb8
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.run)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:252 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x11fa0fc0, 0x111aeba0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 3762 [select]:
github.com/hashicorp/consul/consul.(*Server).leaderLoop(0x10e6b5e0, 0x10e5b180)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:105 +0x47c
created by github.com/hashicorp/consul/consul.(*Server).monitorLeadership
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:37 +0xe4

goroutine 3716 [select, 7 minutes]:
github.com/hashicorp/consul/consul.(*Server).monitorLeadership(0x10e6b5e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:33 +0x1a0
created by github.com/hashicorp/consul/consul.(*Server).setupRaft
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:426 +0x8a4

goroutine 4317 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x10f505a0, 0x111fa9c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:142 +0x1b0
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 3703 [select, 7 minutes]:
github.com/hashicorp/consul/consul.(*RaftLayer).Accept(0x110fcce0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/raft_rpc.go:57 +0x138
github.com/hashicorp/raft.(*NetworkTransport).listen(0x11180640)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:362 +0x50
created by github.com/hashicorp/raft.NewNetworkTransportWithLogger
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:154 +0x270

goroutine 4419 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x1114d7c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 4396 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb5431ba8, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1114bef8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1114bef8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x1114bec0, 0x0, 0xb6477a30, 0x1122a0a0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10e27a10, 0x1, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x10f826c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:139 +0xcfc

goroutine 3725 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x1114c500)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 4412 [select]:
github.com/hashicorp/consul/consul.(*Coordinate).batchUpdate(0x1120a250)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:41 +0x1cc
created by github.com/hashicorp/consul/consul.NewCoordinate
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:33 +0xc4

goroutine 4223 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6476b68, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111c2eb8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111c2eb8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x111c2e80, 0x0, 0xb6477a30, 0x1125b9e0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10f7b5d0, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10f7b5d0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x10f83320, 0xb64778a8, 0x10f7b5d0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x10f83320, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x10f83320, 0x111d9ad0, 0x111c2a80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 4413 [select, 7 minutes]:
github.com/hashicorp/consul/consul.(*RaftLayer).Accept(0x11026f20, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/raft_rpc.go:57 +0x138
github.com/hashicorp/raft.(*NetworkTransport).listen(0x11650e10)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:362 +0x50
created by github.com/hashicorp/raft.NewNetworkTransportWithLogger
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:154 +0x270

goroutine 4397 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb5431b30, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1114bf38, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1114bf38, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x1114bf00, 0x11490000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb6472030, 0x10e0a07c)
	/usr/lib/go-1.6/src/net/fd_unix.go:277 +0x20c
net.(*UDPConn).ReadFromUDP(0x10e27a18, 0x11490000, 0x10000, 0x10000, 0x736698, 0x10000, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x10e27a18, 0x11490000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x10f826c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:140 +0xd18

goroutine 5583 [IO wait, 5 minutes]:
net.runtime_pollWait(0xb3013700, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f2e538, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f2e538, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10f2e500, 0x0, 0xb6477a30, 0x10f4a760)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x111afef0, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x111afef0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x11d50cf0, 0xb64778a8, 0x111afef0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x11d50cf0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x11d50cf0, 0x10e11260, 0x1110cd00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 3584 [sleep]:
time.Sleep(0x3b9aca00, 0x0)
	/usr/lib/go-1.6/src/runtime/time.go:59 +0x104
github.com/armon/go-metrics.(*Metrics).collectStats(0x11269a40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/armon/go-metrics/metrics.go:67 +0x28
created by github.com/armon/go-metrics.New
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/armon/go-metrics/start.go:61 +0xbc

goroutine 4426 [IO wait]:
net.runtime_pollWait(0xb5431ab8, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110b2db8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110b2db8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x110b2d80, 0x0, 0xb6477a30, 0x10f75200)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10e27cc8, 0x9f5c38, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x10f82e10)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:139 +0xcfc

goroutine 4434 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114d900, 0x885e40, 0x6, 0x10f4f640)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 9778 [select]:
github.com/hashicorp/consul/consul.(*RaftLayer).Accept(0x10f3c340, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/raft_rpc.go:57 +0x138
github.com/hashicorp/raft.(*NetworkTransport).listen(0x11e29c70)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:362 +0x50
created by github.com/hashicorp/raft.NewNetworkTransportWithLogger
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:154 +0x270

goroutine 4336 [select]:
github.com/hashicorp/consul/consul.(*Coordinate).batchUpdate(0x11242e50)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:41 +0x1cc
created by github.com/hashicorp/consul/consul.NewCoordinate
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:33 +0xc4

goroutine 4404 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6477540, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111b1bf8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111b1bf8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x111b1bc0, 0x0, 0xb6477a30, 0x1126ae60)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x112332b8, 0x3799a4, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x112332b8, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/hashicorp/consul/consul.(*Server).listen(0x10e6a9a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:60 +0x48
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:273 +0xea8

goroutine 4389 [select, 7 minutes]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x1103daa0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

goroutine 3746 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6477720, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111c27b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111c27b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x111c2780, 0x0, 0xb6477a30, 0x1126b730)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*UnixListener).AcceptUnix(0x111df3a0, 0x3, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/unixsock_posix.go:305 +0x4c
net.(*UnixListener).Accept(0x111df3a0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/unixsock_posix.go:315 +0x3c
github.com/hashicorp/consul/command/agent.(*AgentRPC).listen(0x111d9da0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/rpc.go:289 +0x50
created by github.com/hashicorp/consul/command/agent.NewAgentRPC
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/rpc.go:257 +0x29c

goroutine 4399 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10f826c0, 0x5f5e100, 0x0, 0x110b28c0, 0x110b2800, 0x10e27c38)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 4316 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10f505a0, 0x5f5e100, 0x0, 0x111faa00, 0x111fa9c0, 0x10e27948)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 5640 [IO wait]:
net.runtime_pollWait(0xb3013610, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x119d71f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x119d71f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x119d71c0, 0x12526000, 0xffff, 0xffff, 0x1111a120, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10f7b7c8, 0x12526000, 0xffff, 0xffff, 0x1111a120, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10f7b7c8, 0x12526000, 0xffff, 0xffff, 0x0, 0x12526000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x116bd9e0, 0x10f7b7c8, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x116bda58, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10f7b810, 0x10f7b7c8, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x116bd9e0, 0x10f7b7c8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x116bd9e0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x116bd9e0, 0x122f7680, 0x119d6480)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 3985 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6477270, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1104c0b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1104c0b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x1104c080, 0x0, 0xb6477a30, 0x1129e170)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10fa4040, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10fa4040, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x10f3f0e0, 0xb64778a8, 0x10fa4040, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x10f3f0e0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x10f3f0e0, 0x112005a0, 0x1104c840)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 4370 [runnable]:
github.com/hashicorp/raft.(*Raft).leaderLoop(0x1162a480)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:874 +0xce8
github.com/hashicorp/raft.(*Raft).runLeader(0x1162a480)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:838 +0x8a0
github.com/hashicorp/raft.(*Raft).run(0x1162a480)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:602 +0xb8
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.run)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:252 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x1162a480, 0x11233718)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 4405 [select]:
github.com/hashicorp/consul/consul.(*Server).sessionStats(0x10e6a9a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/session_ttl.go:152 +0x1c4
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:276 +0xec4

goroutine 3749 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6476fa0, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1114a178, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1114a178, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x1114a140, 0x0, 0xb6477a30, 0x112884c0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x111ae290, 0x457c70, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x111ae290, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x10f3e750, 0xb64778a8, 0x111ae290, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x10f3e750, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x10f3e750, 0x111d9e60, 0x111c2980)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 4314 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb64774c8, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111fa7b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111fa7b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x111fa780, 0x115a0000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb6472030, 0x10e0a07c)
	/usr/lib/go-1.6/src/net/fd_unix.go:277 +0x20c
net.(*UDPConn).ReadFromUDP(0x10e276c0, 0x115a0000, 0x10000, 0x10000, 0x736698, 0x10000, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x10e276c0, 0x115a0000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x10f505a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:140 +0xd18

goroutine 4806 [IO wait]:
net.runtime_pollWait(0xb54319c8, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x114bf3f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x114bf3f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x114bf3c0, 0x11400000, 0xffff, 0xffff, 0x11c4acc0, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x112ca7d0, 0x11400000, 0xffff, 0xffff, 0x11c4acc0, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x112ca7d0, 0x11400000, 0xffff, 0xffff, 0x0, 0x11400000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x1156cd80, 0x112ca7d0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x1156cdf8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x112ca830, 0x112ca7d0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x1156cd80, 0x112ca7d0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x1156cd80, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x1156cd80, 0x1116a8a0, 0x114bf240)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4384 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10e828c0, 0x885e40, 0x6, 0x10fb5780)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 4222 [IO wait]:
net.runtime_pollWait(0xb6476be0, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111c2e78, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111c2e78, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x111c2e40, 0x12300000, 0xffff, 0xffff, 0x122f6d80, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10f7b5a8, 0x12300000, 0xffff, 0xffff, 0x122f6d80, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10f7b5a8, 0x12300000, 0xffff, 0xffff, 0x0, 0x12300000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x10f83200, 0x10f7b5a8, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x10f83278, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10f7b5c0, 0x10f7b5a8, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x10f83200, 0x10f7b5a8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x10f83200, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x10f83200, 0x111d9ad0, 0x111c29c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 5345 [IO wait]:
net.runtime_pollWait(0xb3013bb0, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110f66f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110f66f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x110f66c0, 0x12076000, 0xffff, 0xffff, 0x111d8f30, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x112cb4e8, 0x12076000, 0xffff, 0xffff, 0x111d8f30, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x112cb4e8, 0x12076000, 0xffff, 0xffff, 0x0, 0x12076000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x11745b90, 0x112cb4e8, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x11745c08, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x112cb538, 0x112cb4e8, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x11745b90, 0x112cb4e8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x11745b90, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x11745b90, 0x11087d10, 0x110f6540)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4420 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114d7c0, 0x885e40, 0x6, 0x10f4efe0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 4087 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6476e38, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110f60f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110f60f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x110f60c0, 0x0, 0xb6477a30, 0x1120a120)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10fa4050, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10fa4050, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x10f09320, 0xb64778a8, 0x10fa4050, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x10f09320, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x10f09320, 0x1105d8f0, 0x10f70800)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 5440 [IO wait, 5 minutes]:
net.runtime_pollWait(0xb3013958, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1193c378, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1193c378, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x1193c340, 0x0, 0xb6477a30, 0x110f8730)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10f244a8, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10f244a8, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x11775dd0, 0xb64778a8, 0x10f244a8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x11775dd0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x11775dd0, 0x1116fda0, 0x11995e80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 3799 [select, 7 minutes]:
github.com/hashicorp/consul/command/agent.(*scadaListener).Accept(0x11017900, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/scada.go:134 +0x138
net/http.(*Server).Serve(0x10fde280, 0xb30122b8, 0x11017900, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/http/server.go:2117 +0xfc
net/http.Serve(0xb30122b8, 0x11017900, 0xb542e5d8, 0x11017940, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/http/server.go:1976 +0x80
created by github.com/hashicorp/consul/command/agent.newScadaHttp
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/http.go:165 +0x204

goroutine 4174 [IO wait]:
net.runtime_pollWait(0xb64773d8, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110b2a78, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110b2a78, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x110b2a40, 0x11ca2000, 0xffff, 0xffff, 0x118b32c0, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10e26f48, 0x11ca2000, 0xffff, 0xffff, 0x118b32c0, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10e26f48, 0x11ca2000, 0xffff, 0xffff, 0x0, 0x11ca2000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x10e68c60, 0x10e26f48, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x10e68cd8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10e26f60, 0x10e26f48, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x10e68c60, 0x10e26f48, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x10e68c60, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x10e68c60, 0x1118a600, 0x110b2940)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 5219 [IO wait]:
net.runtime_pollWait(0xb3013e08, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110faf78, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110faf78, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x110faf40, 0x11a9c000, 0xffff, 0xffff, 0x124ea390, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x112cac70, 0x11a9c000, 0xffff, 0xffff, 0x124ea390, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x112cac70, 0x11a9c000, 0xffff, 0xffff, 0x0, 0x11a9c000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x11e94510, 0x112cac70, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x11e94588, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x112cac98, 0x112cac70, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x11e94510, 0x112cac70, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x11e94510, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x11e94510, 0x112bf3e0, 0x111fb9c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 3702 [select]:
github.com/hashicorp/consul/consul.(*Coordinate).batchUpdate(0x1119da50)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:41 +0x1cc
created by github.com/hashicorp/consul/consul.NewCoordinate
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:33 +0xc4

goroutine 4335 [select]:
github.com/hashicorp/consul/consul.(*ConnPool).reap(0x111c7d40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:412 +0x3b4
created by github.com/hashicorp/consul/consul.NewPool
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:175 +0x1bc

goroutine 3701 [select]:
github.com/hashicorp/consul/consul.(*ConnPool).reap(0x11269e90)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:412 +0x3b4
created by github.com/hashicorp/consul/consul.NewPool
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:175 +0x1bc

goroutine 3798 [select, 1 minutes]:
github.com/hashicorp/scada-client.(*Provider).wait(0x10f47440)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/scada-client/provider.go:214 +0x148
github.com/hashicorp/scada-client.(*Provider).run(0x10f47440)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/scada-client/provider.go:226 +0x68
created by github.com/hashicorp/scada-client.NewProvider
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/scada-client/provider.go:158 +0x240

goroutine 4373 [select, 7 minutes]:
github.com/hashicorp/consul/consul.(*Server).monitorLeadership(0x10e6a9a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:33 +0x1a0
created by github.com/hashicorp/consul/consul.(*Server).setupRaft
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:426 +0x8a4

goroutine 4888 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb5430e88, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110f7678, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110f7678, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x110f7640, 0x0, 0xb6477a30, 0x111b5ed0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x111ae430, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x111ae430, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x116bde60, 0xb64778a8, 0x111ae430, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x116bde60, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x116bde60, 0x112bec00, 0x10f70f80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 6339 [IO wait, 4 minutes]:
net.runtime_pollWait(0xb54316f8, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x119932f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x119932f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x119932c0, 0x0, 0xb6477a30, 0x1129f150)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*UnixListener).AcceptUnix(0x1129be00, 0x122f85d8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/unixsock_posix.go:305 +0x4c
net.(*UnixListener).Accept(0x1129be00, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/unixsock_posix.go:315 +0x3c
net/http.(*Server).Serve(0x122f85a0, 0xb647f9c8, 0x1129be00, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/http/server.go:2117 +0xfc
net/http.Serve(0xb647f9c8, 0x1129be00, 0xb542e5d8, 0x120a4e80, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/http/server.go:1976 +0x80
created by github.com/hashicorp/consul/command/agent.NewHTTPServers
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/http.go:141 +0x1364

goroutine 4395 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x11028380)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 4392 [select, 1 minutes]:
github.com/hashicorp/raft.(*Raft).runSnapshots(0x1162a6c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:1706 +0x380
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runSnapshots)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:254 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x1162a6c0, 0x10e279b0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 4321 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114d680, 0x885e40, 0x6, 0x1103c2a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 3938 [IO wait]:
net.runtime_pollWait(0xb64772e8, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f72238, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f72238, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x10f72200, 0x11f4a000, 0xffff, 0xffff, 0x11301830, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10fa4728, 0x11f4a000, 0xffff, 0xffff, 0x11301830, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10fa4728, 0x11f4a000, 0xffff, 0xffff, 0x0, 0x11f4a000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x10e68fc0, 0x10fa4728, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x10e69038, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10fa4740, 0x10fa4728, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x10e68fc0, 0x10fa4728, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x10e68fc0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x10e68fc0, 0x1122d560, 0x111fa4c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4414 [select]:
github.com/hashicorp/consul/consul.(*Server).leaderLoop(0x10e6a9a0, 0x111fb940)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:105 +0x47c
created by github.com/hashicorp/consul/consul.(*Server).monitorLeadership
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:37 +0xe4

goroutine 4337 [select, 7 minutes]:
github.com/hashicorp/consul/consul.(*RaftLayer).Accept(0x111d6f80, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/raft_rpc.go:57 +0x138
github.com/hashicorp/raft.(*NetworkTransport).listen(0x112a7e50)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:362 +0x50
created by github.com/hashicorp/raft.NewNetworkTransportWithLogger
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:154 +0x270

goroutine 4383 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x10e828c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 4378 [select, 7 minutes]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x1156d4d0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:141 +0xd34

goroutine 4401 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10f826c0, 0x5f5e100, 0x0, 0x110b29c0, 0x110b2800, 0x10e27c48)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 4387 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114d680, 0x886c40, 0x5, 0x1103c2e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 6354 [IO wait, 4 minutes]:
net.runtime_pollWait(0xb54318d8, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x120c8eb8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x120c8eb8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x120c8e80, 0x118e0000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb6472030, 0x10e0a07c)
	/usr/lib/go-1.6/src/net/fd_unix.go:277 +0x20c
net.(*UDPConn).ReadFromUDP(0x111aed40, 0x118e0000, 0x10000, 0x10000, 0x736698, 0x10000, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x111aed40, 0x118e0000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x11c48630)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:140 +0xd18

goroutine 4313 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb6477180, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111fa778, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111fa778, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x111fa740, 0x0, 0xb6477a30, 0x111de730)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10e276b8, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x10f505a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:139 +0xcfc

goroutine 4432 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x1114d900)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 4315 [select, 7 minutes]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x10f505a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:141 +0xd34

goroutine 4436 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1114d900, 0x886c40, 0x5, 0x10f4f680)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 4433 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x1114d900)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 4411 [select]:
github.com/hashicorp/consul/consul.(*ConnPool).reap(0x1122d7a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:412 +0x3b4
created by github.com/hashicorp/consul/consul.NewPool
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:175 +0x1bc

goroutine 4424 [select, 7 minutes]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x10f4f5a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

goroutine 4400 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x10f826c0, 0x110b2800)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:142 +0x1b0
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 6357 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x11c48630, 0x120c9100)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:142 +0x1b0
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 5308 [IO wait]:
net.runtime_pollWait(0xb3013ca0, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1102b6f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1102b6f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x1102b6c0, 0x11c1a000, 0xffff, 0xffff, 0x112031a0, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10e27b58, 0x11c1a000, 0xffff, 0xffff, 0x112031a0, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10e27b58, 0x11c1a000, 0xffff, 0xffff, 0x0, 0x11c1a000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x116bf320, 0x10e27b58, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x116bf398, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10e27ba8, 0x10e27b58, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x116bf320, 0x10e27b58, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x116bf320, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x116bf320, 0x1164a540, 0x11992580)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 5641 [IO wait, 5 minutes]:
net.runtime_pollWait(0xb3013598, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x11ebd6f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x11ebd6f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x11ebd6c0, 0x0, 0xb6477a30, 0x11221890)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10fa5760, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10fa5760, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x116bda70, 0xb64778a8, 0x10fa5760, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x116bda70, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x116bda70, 0x122f7680, 0x119d6980)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 5042 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb30140d8, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10e5a138, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10e5a138, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10e5a100, 0x0, 0xb6477a30, 0x111ba390)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x112cb3b0, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x112cb3b0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x116be870, 0xb64778a8, 0x112cb3b0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x116be870, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x116be870, 0x11d24a50, 0x1110dd80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 4720 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb5431248, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1124e0b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1124e0b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x1124e080, 0x0, 0xb6477a30, 0x1119df80)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x112ca730, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x112ca730, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x116be630, 0xb64778a8, 0x112ca730, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x116be630, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x116be630, 0x111b3740, 0x114bff00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 4853 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb5430f78, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10fe4138, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10fe4138, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10fe4100, 0x0, 0xb6477a30, 0x1129bf40)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x112325a8, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x112325a8, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x116bf9e0, 0xb64778a8, 0x112325a8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x116bf9e0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x116bf9e0, 0x1118ac90, 0x10f1e700)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 4519 [IO wait]:
net.runtime_pollWait(0xb5431680, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x112485f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x112485f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x112485c0, 0x12174000, 0xffff, 0xffff, 0x11268030, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10fa45c0, 0x12174000, 0xffff, 0xffff, 0x11268030, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10fa45c0, 0x12174000, 0xffff, 0xffff, 0x0, 0x12174000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x111eef30, 0x10fa45c0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x111eefa8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10fa45d8, 0x10fa45c0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x111eef30, 0x10fa45c0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x111eef30, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x111eef30, 0x111c7080, 0x111c2b40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4520 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb5431608, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x111c31b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x111c31b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x111c3180, 0x0, 0xb6477a30, 0x1126ae20)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10f7a778, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10f7a778, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x111eefc0, 0xb64778a8, 0x10f7a778, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x111eefc0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x111eefc0, 0x111c7080, 0x111c2c40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 4852 [IO wait]:
net.runtime_pollWait(0xb5430ff0, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f1fab8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f1fab8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x10f1fa80, 0x11cb2000, 0xffff, 0xffff, 0x118b3320, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x111ae908, 0x11cb2000, 0xffff, 0xffff, 0x118b3320, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x111ae908, 0x11cb2000, 0xffff, 0xffff, 0x0, 0x11cb2000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x116bf950, 0x111ae908, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x116bf9c8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x111ae930, 0x111ae908, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x116bf950, 0x111ae908, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x116bf950, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x116bf950, 0x1118ac90, 0x10f1e6c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 6334 [select, 4 minutes]:
github.com/hashicorp/consul/consul.(*Server).monitorLeadership(0x11874380)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:33 +0x1a0
created by github.com/hashicorp/consul/consul.(*Server).setupRaft
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:426 +0x8a4

goroutine 6336 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x111c5080)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 9649 [select]:
github.com/hashicorp/consul/consul.(*Coordinate).batchUpdate(0x10fa1900)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:41 +0x1cc
created by github.com/hashicorp/consul/consul.NewCoordinate
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:33 +0xc4

goroutine 5105 [IO wait]:
net.runtime_pollWait(0xb3013fe8, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x119d79f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x119d79f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x119d79c0, 0x11aac000, 0xffff, 0xffff, 0x118b33b0, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x112cab40, 0x11aac000, 0xffff, 0xffff, 0x118b33b0, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x112cab40, 0x11aac000, 0xffff, 0xffff, 0x0, 0x11aac000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x111efe60, 0x112cab40, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x111efed8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x112cab60, 0x112cab40, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x111efe60, 0x112cab40, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x111efe60, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x111efe60, 0x1164b380, 0x119d73c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 5582 [IO wait]:
net.runtime_pollWait(0xb3013688, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1110d478, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1110d478, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x1110d440, 0x115d6000, 0xffff, 0xffff, 0x113cd740, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10f24b08, 0x115d6000, 0xffff, 0xffff, 0x113cd740, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10f24b08, 0x115d6000, 0xffff, 0xffff, 0x0, 0x115d6000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x11d50c60, 0x10f24b08, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x11d50cd8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10f24b20, 0x10f24b08, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x11d50c60, 0x10f24b08, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x11d50c60, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x11d50c60, 0x10e11260, 0x1110cc80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 6332 [select]:
github.com/hashicorp/raft.(*Raft).runFSM(0x11fa0fc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:509 +0xd5c
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runFSM)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:253 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x11fa0fc0, 0x111aebb8)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 9647 [runnable]:
syscall.Syscall(0x94, 0x3e, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/syscall/asm_linux_arm.s:17 +0x8
syscall.Fdatasync(0x3e, 0x0, 0x0)
	/usr/lib/go-1.6/src/syscall/zsyscall_linux_arm.go:472 +0x44
github.com/boltdb/bolt.fdatasync(0x113c1560, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/bolt_linux.go:9 +0x40
github.com/boltdb/bolt.(*DB).init(0x113c1560, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/db.go:382 +0x3b0
github.com/boltdb/bolt.Open(0x11679a60, 0x20, 0x180, 0xc99820, 0x2, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/db.go:199 +0x31c
github.com/hashicorp/raft-boltdb.NewBoltStore(0x11679a60, 0x20, 0x2, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft-boltdb/bolt_store.go:39 +0x44
github.com/hashicorp/consul/consul.(*Server).setupRaft(0x12334460, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:369 +0xacc
github.com/hashicorp/consul/consul.NewServer(0x113cfd10, 0x113cfd10, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:249 +0xad8
github.com/hashicorp/consul/command/agent.(*Agent).setupServer(0x113ceff0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:375 +0x120
github.com/hashicorp/consul/command/agent.Create(0x11ceec00, 0xb64763b0, 0x10e260d0, 0x5, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:190 +0xb80
github.com/hashicorp/consul/command/agent.makeAgentLog(0x113de660, 0x11ceec00, 0x0, 0x0, 0x0, 0x0, 0x1)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent_test.go:74 +0x188
github.com/hashicorp/consul/command/agent.makeAgent(0x113de660, 0x11ceec00, 0x0, 0x0, 0x11e2f180)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent_test.go:109 +0x40
github.com/hashicorp/consul/command/agent.makeHTTPServerWithConfig(0x113de660, 0x0, 0x0, 0x0, 0xc0001)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/http_test.go:36 +0x70
github.com/hashicorp/consul/command/agent.httpTestWithConfig(0x113de660, 0x11a1ffa0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/http_test.go:688 +0x24
github.com/hashicorp/consul/command/agent.httpTest(0x113de660, 0x11a1ffa0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/http_test.go:684 +0x2c
github.com/hashicorp/consul/command/agent.TestSessionList(0x113de660)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/session_endpoint_test.go:403 +0x34
testing.tRunner(0x113de660, 0xc88ae0)
	/usr/lib/go-1.6/src/testing/testing.go:473 +0xa8
created by testing.RunTests
	/usr/lib/go-1.6/src/testing/testing.go:582 +0x600

goroutine 5492 [IO wait, 5 minutes]:
net.runtime_pollWait(0xb30138e0, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x114be9f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x114be9f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x114be9c0, 0x0, 0xb6477a30, 0x11056550)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10f7b110, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10f7b110, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x11e94d80, 0xb64778a8, 0x10f7b110, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x11e94d80, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x11e94d80, 0x113cce10, 0x11c89080)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 6358 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x11c48630, 0x5f5e100, 0x0, 0x120c91c0, 0x120c9100, 0x111af468)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 6360 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x1164fb80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 5041 [IO wait]:
net.runtime_pollWait(0xb3014150, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f043f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f043f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x10f043c0, 0x116ac000, 0xffff, 0xffff, 0x1256cff0, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x1112cca8, 0x116ac000, 0xffff, 0xffff, 0x1256cff0, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x1112cca8, 0x116ac000, 0xffff, 0xffff, 0x0, 0x116ac000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x11745ef0, 0x1112cca8, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x11745f68, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x1112ccc8, 0x1112cca8, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x11745ef0, 0x1112cca8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x11745ef0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x11745ef0, 0x11d24a50, 0x1110dd00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4945 [IO wait]:
net.runtime_pollWait(0xb5430e10, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1110d3b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1110d3b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x1110d380, 0x12244000, 0xffff, 0xffff, 0x11d25e60, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10f7bcd0, 0x12244000, 0xffff, 0xffff, 0x11d25e60, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10f7bcd0, 0x12244000, 0xffff, 0xffff, 0x0, 0x12244000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x11774d80, 0x10f7bcd0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x11774df8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10f7bce8, 0x10f7bcd0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x11774d80, 0x10f7bcd0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x11774d80, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x11774d80, 0x1118aba0, 0x10e5b380)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4634 [IO wait]:
net.runtime_pollWait(0xb54314a0, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110f6538, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110f6538, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x110f6500, 0x11f3a000, 0xffff, 0xffff, 0x11271500, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x112ca208, 0x11f3a000, 0xffff, 0xffff, 0x11271500, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x112ca208, 0x11f3a000, 0xffff, 0xffff, 0x0, 0x11f3a000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x116bd7a0, 0x112ca208, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x116bd818, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x112ca228, 0x112ca208, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x116bd7a0, 0x112ca208, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x116bd7a0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x116bd7a0, 0x1111a6c0, 0x110b2f80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4635 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb5431428, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110b32f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110b32f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x110b32c0, 0x0, 0xb6477a30, 0x11221100)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10f7b170, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10f7b170, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x116bd830, 0xb64778a8, 0x10f7b170, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x116bd830, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x116bd830, 0x1111a6c0, 0x110b2fc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 4669 [IO wait]:
net.runtime_pollWait(0xb54313b0, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1124e3f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1124e3f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x1124e3c0, 0x11c2a000, 0xffff, 0xffff, 0x11202630, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10fa4d18, 0x11c2a000, 0xffff, 0xffff, 0x11202630, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10fa4d18, 0x11c2a000, 0xffff, 0xffff, 0x0, 0x11c2a000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x111ef320, 0x10fa4d18, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x111ef398, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10fa4d58, 0x10fa4d18, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x111ef320, 0x10fa4d18, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x111ef320, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x111ef320, 0x11087f80, 0x1126d700)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4670 [IO wait, 7 minutes]:
net.runtime_pollWait(0xb5431338, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10eac6b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10eac6b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10eac680, 0x0, 0xb6477a30, 0x11243620)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x112ca1b8, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x112ca1b8, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x111ef5f0, 0xb64778a8, 0x112ca1b8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x111ef5f0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x111ef5f0, 0x11087f80, 0x1126dac0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 6333 [select, 2 minutes]:
github.com/hashicorp/raft.(*Raft).runSnapshots(0x11fa0fc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:1706 +0x380
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runSnapshots)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:254 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x11fa0fc0, 0x111aebc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 5106 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb3014060, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x11994838, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x11994838, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x11994800, 0x0, 0xb6477a30, 0x11002d20)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10f7a1c0, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10f7a1c0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x10f08cf0, 0xb64778a8, 0x10f7a1c0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x10f08cf0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x10f08cf0, 0x1164b380, 0x119d7400)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 6356 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x11c48630, 0x5f5e100, 0x0, 0x120c9180, 0x120c9100, 0x111af440)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 4774 [IO wait]:
net.runtime_pollWait(0xb5431158, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x114dae38, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x114dae38, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x114dae00, 0x12696000, 0xffff, 0xffff, 0x1164a9c0, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x112ca1d0, 0x12696000, 0xffff, 0xffff, 0x1164a9c0, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x112ca1d0, 0x12696000, 0xffff, 0xffff, 0x0, 0x12696000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x1156c990, 0x112ca1d0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x1156ca08, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x112ca1f0, 0x112ca1d0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x1156c990, 0x112ca1d0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x1156c990, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x1156c990, 0x11268b10, 0x110c2d00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 4775 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb54310e0, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x114bec38, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x114bec38, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x114bec00, 0x0, 0xb6477a30, 0x1129ede0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x112322c0, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x112322c0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x1156cab0, 0xb64778a8, 0x112322c0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x1156cab0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x1156cab0, 0x11268b10, 0x110c2d40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 6362 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1164fb80, 0x8856d0, 0x5, 0x11f86280)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 5309 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb3013c28, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x11b4a738, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x11b4a738, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x11b4a700, 0x0, 0xb6477a30, 0x112ba1f0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x11232060, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x11232060, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x116bf4d0, 0xb64778a8, 0x11232060, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x116bf4d0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x116bf4d0, 0x1164a540, 0x11992640)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 5002 [IO wait]:
net.runtime_pollWait(0xb3014240, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110fa078, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110fa078, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x110fa040, 0x119e8000, 0xffff, 0xffff, 0x11c4bf80, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10fa47f8, 0x119e8000, 0xffff, 0xffff, 0x11c4bf80, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10fa47f8, 0x119e8000, 0xffff, 0xffff, 0x0, 0x119e8000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x11744f30, 0x10fa47f8, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x11744fa8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10fa4910, 0x10fa47f8, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x11744f30, 0x10fa47f8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x11744f30, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x11744f30, 0x1111b020, 0x111fbb40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 5003 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb30141c8, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10eadab8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10eadab8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10eada80, 0x0, 0xb6477a30, 0x11220990)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x1112c250, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x1112c250, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x11744fc0, 0xb64778a8, 0x1112c250, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x11744fc0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x11744fc0, 0x1111b020, 0x111fbe00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 6361 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1164fb80, 0x885e40, 0x6, 0x11f86260)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 5142 [IO wait]:
net.runtime_pollWait(0xb3013ef8, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x1126d378, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x1126d378, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x1126d340, 0x114c0000, 0xffff, 0xffff, 0x118b28a0, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x111af428, 0x114c0000, 0xffff, 0xffff, 0x118b28a0, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x111af428, 0x114c0000, 0xffff, 0xffff, 0x0, 0x114c0000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x11c9a090, 0x111af428, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x11c9a108, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x111af448, 0x111af428, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x11c9a090, 0x111af428, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x11c9a090, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x11c9a090, 0x11086150, 0x1126c800)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 5143 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb3013e80, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10ead9b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10ead9b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x10ead980, 0x0, 0xb6477a30, 0x10f74780)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x112cb0f0, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x112cb0f0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x11c9a120, 0xb64778a8, 0x112cb0f0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x11c9a120, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x11c9a120, 0x11086150, 0x1126c840)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 6355 [select, 4 minutes]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x11c48630)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:141 +0xd34

goroutine 5260 [IO wait]:
net.runtime_pollWait(0xb3013d18, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10f2e038, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10f2e038, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x10f2e000, 0x12748000, 0xffff, 0xffff, 0x124eb9e0, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x1112c018, 0x12748000, 0xffff, 0xffff, 0x124eb9e0, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x1112c018, 0x12748000, 0xffff, 0xffff, 0x0, 0x12748000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x11e95830, 0x1112c018, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x11e958a8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x1112c0b8, 0x1112c018, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x11e95830, 0x1112c018, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x11e95830, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x11e95830, 0x11300390, 0x10f2f380)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 5261 [IO wait, 6 minutes]:
net.runtime_pollWait(0xb54317e8, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x11b4a1f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x11b4a1f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x11b4a1c0, 0x0, 0xb6477a30, 0x11064af0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x111ae078, 0x457c70, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x111ae078, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x11e958c0, 0xb64778a8, 0x111ae078, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x11e958c0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x11e958c0, 0x11300390, 0x10f2f3c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 6363 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1164fb80, 0x886c40, 0x5, 0x11f862a0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 5384 [IO wait]:
net.runtime_pollWait(0xb3013ac0, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10eae578, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10eae578, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x10eae540, 0x126d6000, 0xffff, 0xffff, 0x1111bd40, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x111af088, 0x126d6000, 0xffff, 0xffff, 0x1111bd40, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x111af088, 0x126d6000, 0xffff, 0xffff, 0x0, 0x126d6000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x11d18d80, 0x111af088, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x11d18df8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x111af0a0, 0x111af088, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x11d18d80, 0x111af088, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x11d18d80, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x11d18d80, 0x1105c9f0, 0x10eae3c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 5385 [IO wait, 5 minutes]:
net.runtime_pollWait(0xb3013a48, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x114da2f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x114da2f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x114da2c0, 0x0, 0xb6477a30, 0x110259e0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10f7bc40, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10f7bc40, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x11d18e10, 0xb64778a8, 0x10f7bc40, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x11d18e10, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x11d18e10, 0x1105c9f0, 0x10eae400)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 5528 [IO wait]:
net.runtime_pollWait(0xb30137f0, 0x72, 0x70)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110f7d38, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110f7d38, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readMsg(0x110f7d00, 0x1278c000, 0xffff, 0xffff, 0x113009c0, 0x28, 0x28, 0xffffffff, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/fd_unix.go:304 +0x274
net.(*UDPConn).ReadMsgUDP(0x10f24bf0, 0x1278c000, 0xffff, 0xffff, 0x113009c0, 0x28, 0x28, 0x28, 0x736698, 0x1, ...)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:96 +0x130
github.com/miekg/dns.ReadFromSessionUDP(0x10f24bf0, 0x1278c000, 0xffff, 0xffff, 0x0, 0x1278c000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/udp.go:47 +0x98
github.com/miekg/dns.(*Server).readUDP(0x11d50750, 0x10f24bf0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x11d507c8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:646 +0x200
github.com/miekg/dns.(*defaultReader).ReadUDP(0x10f24c08, 0x10f24bf0, 0x77359400, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:253 +0x58
github.com/miekg/dns.(*Server).serveUDP(0x11d50750, 0x10f24bf0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:519 +0x214
github.com/miekg/dns.(*Server).ListenAndServe(0x11d50750, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:370 +0x674
github.com/hashicorp/consul/command/agent.NewDNSServer.func1(0x11d50750, 0x1105c630, 0x110f7680)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:108 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:112 +0x97c

goroutine 5529 [IO wait, 5 minutes]:
net.runtime_pollWait(0xb3013778, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110b3578, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110b3578, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x110b3540, 0x0, 0xb6477a30, 0x112aaed0)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x111103a8, 0x69bf0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x111103a8, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/miekg/dns.(*Server).serveTCP(0x11d507e0, 0xb64778a8, 0x111103a8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:476 +0x200
github.com/miekg/dns.(*Server).ListenAndServe(0x11d507e0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/miekg/dns/server.go:334 +0x414
github.com/hashicorp/consul/command/agent.NewDNSServer.func2(0x11d507e0, 0x1105c630, 0x110f7740)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:116 +0x1c
created by github.com/hashicorp/consul/command/agent.NewDNSServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/dns.go:120 +0x9c8

goroutine 6328 [select]:
github.com/hashicorp/consul/consul.(*ConnPool).reap(0x11d24660)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:412 +0x3b4
created by github.com/hashicorp/consul/consul.NewPool
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:175 +0x1bc

goroutine 6364 [select, 4 minutes]:
github.com/hashicorp/consul/consul.(*Server).lanEventHandler(0x11874380)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:37 +0x47c
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:261 +0xd28

goroutine 6365 [select, 4 minutes]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x11f86a60)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

goroutine 6366 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x111c5100)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 6367 [IO wait, 4 minutes]:
net.runtime_pollWait(0xb5431860, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x120c9a38, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x120c9a38, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x120c9a00, 0x0, 0xb6477a30, 0x10f4bf90)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x111af800, 0xc0001, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x11c48b40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:139 +0xcfc

goroutine 6368 [IO wait, 4 minutes]:
net.runtime_pollWait(0xb5431770, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x120c9a78, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x120c9a78, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x120c9a40, 0x118f0000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb6472030, 0x10e0a07c)
	/usr/lib/go-1.6/src/net/fd_unix.go:277 +0x20c
net.(*UDPConn).ReadFromUDP(0x111af808, 0x118f0000, 0x10000, 0x10000, 0x736698, 0x10000, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x111af808, 0x118f0000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x11c48b40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:140 +0xd18

goroutine 6369 [select, 4 minutes]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x11c48b40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:141 +0xd34

goroutine 6370 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x11c48b40, 0x5f5e100, 0x0, 0x120c9e80, 0x120c9e00, 0x111afbe8)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 6371 [select, 1 minutes]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x11c48b40, 0x120c9e00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:142 +0x1b0
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 6372 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x11c48b40, 0x5f5e100, 0x0, 0x120c9ec0, 0x120c9e00, 0x111afbf8)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 6373 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x1164fcc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 6374 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x1164fcc0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 6375 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1164fcc0, 0x885e40, 0x6, 0x11f86b00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 6376 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1164fcc0, 0x8856d0, 0x5, 0x11f86b20)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 6377 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x1164fcc0, 0x886c40, 0x5, 0x11f86b40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 6378 [select, 4 minutes]:
github.com/hashicorp/consul/consul.(*Server).wanEventHandler(0x11874380)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:67 +0x2c8
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:270 +0xe8c

goroutine 6379 [IO wait, 4 minutes]:
net.runtime_pollWait(0xb5431c98, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x122fe5f8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x122fe5f8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x122fe5c0, 0x0, 0xb6477a30, 0x110f8170)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10f7bb00, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10f7bb00, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/hashicorp/consul/consul.(*Server).listen(0x11874380)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:60 +0x48
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:273 +0xea8

goroutine 6380 [select]:
github.com/hashicorp/consul/consul.(*Server).sessionStats(0x11874380)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/session_ttl.go:152 +0x1c4
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:276 +0xec4

goroutine 6381 [select, 4 minutes]:
github.com/hashicorp/consul/command/agent.(*Agent).handleEvents(0x113cea50)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/user_event.go:111 +0x2a0
created by github.com/hashicorp/consul/command/agent.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:218 +0xe50

goroutine 6382 [select]:
github.com/hashicorp/consul/command/agent.(*Agent).sendCoordinate(0x113cea50)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:612 +0x600
created by github.com/hashicorp/consul/command/agent.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/command/agent/agent.go:222 +0xe80

goroutine 6383 [select]:
github.com/hashicorp/consul/consul.(*Server).leaderLoop(0x11874380, 0x1193cc80)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:105 +0x47c
created by github.com/hashicorp/consul/consul.(*Server).monitorLeadership
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:37 +0xe4

goroutine 9648 [select]:
github.com/hashicorp/consul/consul.(*ConnPool).reap(0x10f23f20)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:412 +0x3b4
created by github.com/hashicorp/consul/consul.NewPool
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:175 +0x1bc

trap    0x0
error   0x0
oldmask 0x0
r0      0xc8a89c
r1      0x0
r2      0x0
r3      0x0
r4      0x0
r5      0x0
r6      0x1c28a1
r7      0xf0
r8      0x191430
r9      0x0
r10     0xc8a2e0
fp      0xc8a194
ip      0x10e1c2f4
sp      0xbee773bc
lr      0x3daa0
pc      0x73888
cpsr    0xa0000010
fault   0x0
*** Test killed with quit: ran too long (10m0s).
FAIL	github.com/hashicorp/consul/command/agent	600.214s
=== RUN   TestACLEndpoint_Apply
2016/06/23 07:44:12 [INFO] raft: Node at 127.0.0.1:15001 [Follower] entering Follower state
2016/06/23 07:44:12 [INFO] serf: EventMemberJoin: Node 15000 127.0.0.1
2016/06/23 07:44:12 [INFO] consul: adding LAN server Node 15000 (Addr: 127.0.0.1:15001) (DC: dc1)
2016/06/23 07:44:12 [INFO] serf: EventMemberJoin: Node 15000.dc1 127.0.0.1
2016/06/23 07:44:12 [INFO] consul: adding WAN server Node 15000.dc1 (Addr: 127.0.0.1:15001) (DC: dc1)
2016/06/23 07:44:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:12 [INFO] raft: Node at 127.0.0.1:15001 [Candidate] entering Candidate state
2016/06/23 07:44:13 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:13 [DEBUG] raft: Vote granted from 127.0.0.1:15001. Tally: 1
2016/06/23 07:44:13 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:13 [INFO] raft: Node at 127.0.0.1:15001 [Leader] entering Leader state
2016/06/23 07:44:13 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:13 [INFO] consul: New leader elected: Node 15000
2016/06/23 07:44:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:13 [DEBUG] raft: Node 127.0.0.1:15001 updated peer set (2): [127.0.0.1:15001]
2016/06/23 07:44:13 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:14 [INFO] consul: member 'Node 15000' joined, marking health alive
2016/06/23 07:44:14 [INFO] consul: shutting down server
2016/06/23 07:44:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:15 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:15 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:44:15 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_Apply (3.41s)
=== RUN   TestACLEndpoint_Update_PurgeCache
2016/06/23 07:44:15 [INFO] raft: Node at 127.0.0.1:15005 [Follower] entering Follower state
2016/06/23 07:44:15 [INFO] serf: EventMemberJoin: Node 15004 127.0.0.1
2016/06/23 07:44:15 [INFO] consul: adding LAN server Node 15004 (Addr: 127.0.0.1:15005) (DC: dc1)
2016/06/23 07:44:15 [INFO] serf: EventMemberJoin: Node 15004.dc1 127.0.0.1
2016/06/23 07:44:15 [INFO] consul: adding WAN server Node 15004.dc1 (Addr: 127.0.0.1:15005) (DC: dc1)
2016/06/23 07:44:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:15 [INFO] raft: Node at 127.0.0.1:15005 [Candidate] entering Candidate state
2016/06/23 07:44:16 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:16 [DEBUG] raft: Vote granted from 127.0.0.1:15005. Tally: 1
2016/06/23 07:44:16 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:16 [INFO] raft: Node at 127.0.0.1:15005 [Leader] entering Leader state
2016/06/23 07:44:16 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:16 [INFO] consul: New leader elected: Node 15004
2016/06/23 07:44:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:16 [DEBUG] raft: Node 127.0.0.1:15005 updated peer set (2): [127.0.0.1:15005]
2016/06/23 07:44:16 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:17 [INFO] consul: member 'Node 15004' joined, marking health alive
2016/06/23 07:44:18 [INFO] consul: shutting down server
2016/06/23 07:44:18 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:18 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:18 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACLEndpoint_Update_PurgeCache (3.66s)
=== RUN   TestACLEndpoint_Apply_CustomID
2016/06/23 07:44:19 [INFO] raft: Node at 127.0.0.1:15009 [Follower] entering Follower state
2016/06/23 07:44:19 [INFO] serf: EventMemberJoin: Node 15008 127.0.0.1
2016/06/23 07:44:19 [INFO] consul: adding LAN server Node 15008 (Addr: 127.0.0.1:15009) (DC: dc1)
2016/06/23 07:44:19 [INFO] serf: EventMemberJoin: Node 15008.dc1 127.0.0.1
2016/06/23 07:44:19 [INFO] consul: adding WAN server Node 15008.dc1 (Addr: 127.0.0.1:15009) (DC: dc1)
2016/06/23 07:44:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:19 [INFO] raft: Node at 127.0.0.1:15009 [Candidate] entering Candidate state
2016/06/23 07:44:20 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:20 [DEBUG] raft: Vote granted from 127.0.0.1:15009. Tally: 1
2016/06/23 07:44:20 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:20 [INFO] raft: Node at 127.0.0.1:15009 [Leader] entering Leader state
2016/06/23 07:44:20 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:20 [INFO] consul: New leader elected: Node 15008
2016/06/23 07:44:20 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:20 [DEBUG] raft: Node 127.0.0.1:15009 updated peer set (2): [127.0.0.1:15009]
2016/06/23 07:44:21 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:21 [INFO] consul: member 'Node 15008' joined, marking health alive
2016/06/23 07:44:21 [INFO] consul: shutting down server
2016/06/23 07:44:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:22 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACLEndpoint_Apply_CustomID (3.33s)
=== RUN   TestACLEndpoint_Apply_Denied
2016/06/23 07:44:22 [INFO] raft: Node at 127.0.0.1:15013 [Follower] entering Follower state
2016/06/23 07:44:22 [INFO] serf: EventMemberJoin: Node 15012 127.0.0.1
2016/06/23 07:44:22 [INFO] serf: EventMemberJoin: Node 15012.dc1 127.0.0.1
2016/06/23 07:44:22 [INFO] consul: adding LAN server Node 15012 (Addr: 127.0.0.1:15013) (DC: dc1)
2016/06/23 07:44:22 [INFO] consul: adding WAN server Node 15012.dc1 (Addr: 127.0.0.1:15013) (DC: dc1)
2016/06/23 07:44:22 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:22 [INFO] raft: Node at 127.0.0.1:15013 [Candidate] entering Candidate state
2016/06/23 07:44:23 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:23 [DEBUG] raft: Vote granted from 127.0.0.1:15013. Tally: 1
2016/06/23 07:44:23 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:23 [INFO] raft: Node at 127.0.0.1:15013 [Leader] entering Leader state
2016/06/23 07:44:23 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:23 [INFO] consul: New leader elected: Node 15012
2016/06/23 07:44:23 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:23 [DEBUG] raft: Node 127.0.0.1:15013 updated peer set (2): [127.0.0.1:15013]
2016/06/23 07:44:23 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:23 [INFO] consul: member 'Node 15012' joined, marking health alive
2016/06/23 07:44:23 [INFO] consul: shutting down server
2016/06/23 07:44:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:24 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACLEndpoint_Apply_Denied (1.84s)
=== RUN   TestACLEndpoint_Apply_DeleteAnon
2016/06/23 07:44:24 [INFO] raft: Node at 127.0.0.1:15017 [Follower] entering Follower state
2016/06/23 07:44:24 [INFO] serf: EventMemberJoin: Node 15016 127.0.0.1
2016/06/23 07:44:24 [INFO] consul: adding LAN server Node 15016 (Addr: 127.0.0.1:15017) (DC: dc1)
2016/06/23 07:44:24 [INFO] serf: EventMemberJoin: Node 15016.dc1 127.0.0.1
2016/06/23 07:44:24 [INFO] consul: adding WAN server Node 15016.dc1 (Addr: 127.0.0.1:15017) (DC: dc1)
2016/06/23 07:44:24 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:24 [INFO] raft: Node at 127.0.0.1:15017 [Candidate] entering Candidate state
2016/06/23 07:44:24 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:24 [DEBUG] raft: Vote granted from 127.0.0.1:15017. Tally: 1
2016/06/23 07:44:24 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:24 [INFO] raft: Node at 127.0.0.1:15017 [Leader] entering Leader state
2016/06/23 07:44:24 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:24 [INFO] consul: New leader elected: Node 15016
2016/06/23 07:44:25 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:25 [DEBUG] raft: Node 127.0.0.1:15017 updated peer set (2): [127.0.0.1:15017]
2016/06/23 07:44:25 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:25 [INFO] consul: member 'Node 15016' joined, marking health alive
2016/06/23 07:44:25 [INFO] consul: shutting down server
2016/06/23 07:44:25 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:25 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:25 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:44:25 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_Apply_DeleteAnon (1.87s)
=== RUN   TestACLEndpoint_Apply_RootChange
2016/06/23 07:44:26 [INFO] raft: Node at 127.0.0.1:15021 [Follower] entering Follower state
2016/06/23 07:44:26 [INFO] serf: EventMemberJoin: Node 15020 127.0.0.1
2016/06/23 07:44:26 [INFO] consul: adding LAN server Node 15020 (Addr: 127.0.0.1:15021) (DC: dc1)
2016/06/23 07:44:26 [INFO] serf: EventMemberJoin: Node 15020.dc1 127.0.0.1
2016/06/23 07:44:26 [INFO] consul: adding WAN server Node 15020.dc1 (Addr: 127.0.0.1:15021) (DC: dc1)
2016/06/23 07:44:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:26 [INFO] raft: Node at 127.0.0.1:15021 [Candidate] entering Candidate state
2016/06/23 07:44:26 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:26 [DEBUG] raft: Vote granted from 127.0.0.1:15021. Tally: 1
2016/06/23 07:44:26 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:26 [INFO] raft: Node at 127.0.0.1:15021 [Leader] entering Leader state
2016/06/23 07:44:26 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:26 [INFO] consul: New leader elected: Node 15020
2016/06/23 07:44:27 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:27 [DEBUG] raft: Node 127.0.0.1:15021 updated peer set (2): [127.0.0.1:15021]
2016/06/23 07:44:27 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:27 [INFO] consul: member 'Node 15020' joined, marking health alive
2016/06/23 07:44:27 [INFO] consul: shutting down server
2016/06/23 07:44:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:27 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:44:27 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_Apply_RootChange (1.93s)
=== RUN   TestACLEndpoint_Get
2016/06/23 07:44:28 [INFO] raft: Node at 127.0.0.1:15025 [Follower] entering Follower state
2016/06/23 07:44:28 [INFO] serf: EventMemberJoin: Node 15024 127.0.0.1
2016/06/23 07:44:28 [INFO] consul: adding LAN server Node 15024 (Addr: 127.0.0.1:15025) (DC: dc1)
2016/06/23 07:44:28 [INFO] serf: EventMemberJoin: Node 15024.dc1 127.0.0.1
2016/06/23 07:44:28 [INFO] consul: adding WAN server Node 15024.dc1 (Addr: 127.0.0.1:15025) (DC: dc1)
2016/06/23 07:44:28 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:28 [INFO] raft: Node at 127.0.0.1:15025 [Candidate] entering Candidate state
2016/06/23 07:44:29 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:29 [DEBUG] raft: Vote granted from 127.0.0.1:15025. Tally: 1
2016/06/23 07:44:29 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:29 [INFO] raft: Node at 127.0.0.1:15025 [Leader] entering Leader state
2016/06/23 07:44:29 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:29 [INFO] consul: New leader elected: Node 15024
2016/06/23 07:44:29 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:29 [DEBUG] raft: Node 127.0.0.1:15025 updated peer set (2): [127.0.0.1:15025]
2016/06/23 07:44:29 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:30 [INFO] consul: member 'Node 15024' joined, marking health alive
2016/06/23 07:44:30 [INFO] consul: shutting down server
2016/06/23 07:44:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:30 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:44:30 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_Get (3.16s)
=== RUN   TestACLEndpoint_GetPolicy
2016/06/23 07:44:31 [INFO] serf: EventMemberJoin: Node 15028 127.0.0.1
2016/06/23 07:44:31 [INFO] raft: Node at 127.0.0.1:15029 [Follower] entering Follower state
2016/06/23 07:44:31 [INFO] consul: adding LAN server Node 15028 (Addr: 127.0.0.1:15029) (DC: dc1)
2016/06/23 07:44:31 [INFO] serf: EventMemberJoin: Node 15028.dc1 127.0.0.1
2016/06/23 07:44:31 [INFO] consul: adding WAN server Node 15028.dc1 (Addr: 127.0.0.1:15029) (DC: dc1)
2016/06/23 07:44:31 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:31 [INFO] raft: Node at 127.0.0.1:15029 [Candidate] entering Candidate state
2016/06/23 07:44:32 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:32 [DEBUG] raft: Vote granted from 127.0.0.1:15029. Tally: 1
2016/06/23 07:44:32 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:32 [INFO] raft: Node at 127.0.0.1:15029 [Leader] entering Leader state
2016/06/23 07:44:32 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:32 [INFO] consul: New leader elected: Node 15028
2016/06/23 07:44:32 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:32 [DEBUG] raft: Node 127.0.0.1:15029 updated peer set (2): [127.0.0.1:15029]
2016/06/23 07:44:32 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:33 [INFO] consul: member 'Node 15028' joined, marking health alive
2016/06/23 07:44:34 [INFO] consul: shutting down server
2016/06/23 07:44:34 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:35 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACLEndpoint_GetPolicy (4.20s)
=== RUN   TestACLEndpoint_List
2016/06/23 07:44:35 [INFO] raft: Node at 127.0.0.1:15033 [Follower] entering Follower state
2016/06/23 07:44:35 [INFO] serf: EventMemberJoin: Node 15032 127.0.0.1
2016/06/23 07:44:35 [INFO] consul: adding LAN server Node 15032 (Addr: 127.0.0.1:15033) (DC: dc1)
2016/06/23 07:44:35 [INFO] serf: EventMemberJoin: Node 15032.dc1 127.0.0.1
2016/06/23 07:44:35 [INFO] consul: adding WAN server Node 15032.dc1 (Addr: 127.0.0.1:15033) (DC: dc1)
2016/06/23 07:44:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:35 [INFO] raft: Node at 127.0.0.1:15033 [Candidate] entering Candidate state
2016/06/23 07:44:36 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:36 [DEBUG] raft: Vote granted from 127.0.0.1:15033. Tally: 1
2016/06/23 07:44:36 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:36 [INFO] raft: Node at 127.0.0.1:15033 [Leader] entering Leader state
2016/06/23 07:44:36 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:36 [INFO] consul: New leader elected: Node 15032
2016/06/23 07:44:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:37 [DEBUG] raft: Node 127.0.0.1:15033 updated peer set (2): [127.0.0.1:15033]
2016/06/23 07:44:37 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:37 [INFO] consul: member 'Node 15032' joined, marking health alive
2016/06/23 07:44:39 [INFO] consul: shutting down server
2016/06/23 07:44:39 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:40 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:44:40 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_List (4.91s)
=== RUN   TestACLEndpoint_List_Denied
2016/06/23 07:44:40 [INFO] raft: Node at 127.0.0.1:15037 [Follower] entering Follower state
2016/06/23 07:44:40 [INFO] serf: EventMemberJoin: Node 15036 127.0.0.1
2016/06/23 07:44:40 [INFO] serf: EventMemberJoin: Node 15036.dc1 127.0.0.1
2016/06/23 07:44:40 [INFO] consul: adding LAN server Node 15036 (Addr: 127.0.0.1:15037) (DC: dc1)
2016/06/23 07:44:40 [INFO] consul: adding WAN server Node 15036.dc1 (Addr: 127.0.0.1:15037) (DC: dc1)
2016/06/23 07:44:40 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:40 [INFO] raft: Node at 127.0.0.1:15037 [Candidate] entering Candidate state
2016/06/23 07:44:41 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:41 [DEBUG] raft: Vote granted from 127.0.0.1:15037. Tally: 1
2016/06/23 07:44:41 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:41 [INFO] raft: Node at 127.0.0.1:15037 [Leader] entering Leader state
2016/06/23 07:44:41 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:41 [INFO] consul: New leader elected: Node 15036
2016/06/23 07:44:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:42 [DEBUG] raft: Node 127.0.0.1:15037 updated peer set (2): [127.0.0.1:15037]
2016/06/23 07:44:42 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:42 [INFO] consul: member 'Node 15036' joined, marking health alive
2016/06/23 07:44:42 [INFO] consul: shutting down server
2016/06/23 07:44:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:43 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:44:43 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACLEndpoint_List_Denied (2.97s)
=== RUN   TestACL_Disabled
2016/06/23 07:44:43 [INFO] raft: Node at 127.0.0.1:15041 [Follower] entering Follower state
2016/06/23 07:44:43 [INFO] serf: EventMemberJoin: Node 15040 127.0.0.1
2016/06/23 07:44:43 [INFO] consul: adding LAN server Node 15040 (Addr: 127.0.0.1:15041) (DC: dc1)
2016/06/23 07:44:43 [INFO] serf: EventMemberJoin: Node 15040.dc1 127.0.0.1
2016/06/23 07:44:43 [INFO] consul: adding WAN server Node 15040.dc1 (Addr: 127.0.0.1:15041) (DC: dc1)
2016/06/23 07:44:43 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:43 [INFO] raft: Node at 127.0.0.1:15041 [Candidate] entering Candidate state
2016/06/23 07:44:44 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:44 [DEBUG] raft: Vote granted from 127.0.0.1:15041. Tally: 1
2016/06/23 07:44:44 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:44 [INFO] raft: Node at 127.0.0.1:15041 [Leader] entering Leader state
2016/06/23 07:44:44 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:44 [INFO] consul: New leader elected: Node 15040
2016/06/23 07:44:44 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:44 [DEBUG] raft: Node 127.0.0.1:15041 updated peer set (2): [127.0.0.1:15041]
2016/06/23 07:44:44 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:44 [INFO] consul: member 'Node 15040' joined, marking health alive
2016/06/23 07:44:44 [INFO] consul: shutting down server
2016/06/23 07:44:44 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:44 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:44 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:44:44 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACL_Disabled (1.88s)
=== RUN   TestACL_ResolveRootACL
2016/06/23 07:44:45 [INFO] raft: Node at 127.0.0.1:15045 [Follower] entering Follower state
2016/06/23 07:44:45 [INFO] serf: EventMemberJoin: Node 15044 127.0.0.1
2016/06/23 07:44:45 [INFO] consul: adding LAN server Node 15044 (Addr: 127.0.0.1:15045) (DC: dc1)
2016/06/23 07:44:45 [INFO] serf: EventMemberJoin: Node 15044.dc1 127.0.0.1
2016/06/23 07:44:45 [INFO] consul: shutting down server
2016/06/23 07:44:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:45 [INFO] consul: adding WAN server Node 15044.dc1 (Addr: 127.0.0.1:15045) (DC: dc1)
2016/06/23 07:44:45 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:45 [INFO] raft: Node at 127.0.0.1:15045 [Candidate] entering Candidate state
2016/06/23 07:44:46 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:46 [DEBUG] raft: Votes needed: 1
--- PASS: TestACL_ResolveRootACL (1.46s)
=== RUN   TestACL_Authority_NotFound
2016/06/23 07:44:46 [INFO] raft: Node at 127.0.0.1:15049 [Follower] entering Follower state
2016/06/23 07:44:46 [INFO] serf: EventMemberJoin: Node 15048 127.0.0.1
2016/06/23 07:44:46 [INFO] consul: adding LAN server Node 15048 (Addr: 127.0.0.1:15049) (DC: dc1)
2016/06/23 07:44:46 [INFO] serf: EventMemberJoin: Node 15048.dc1 127.0.0.1
2016/06/23 07:44:46 [INFO] consul: adding WAN server Node 15048.dc1 (Addr: 127.0.0.1:15049) (DC: dc1)
2016/06/23 07:44:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:46 [INFO] raft: Node at 127.0.0.1:15049 [Candidate] entering Candidate state
2016/06/23 07:44:47 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:47 [DEBUG] raft: Vote granted from 127.0.0.1:15049. Tally: 1
2016/06/23 07:44:47 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:47 [INFO] raft: Node at 127.0.0.1:15049 [Leader] entering Leader state
2016/06/23 07:44:47 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:47 [INFO] consul: New leader elected: Node 15048
2016/06/23 07:44:47 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:47 [DEBUG] raft: Node 127.0.0.1:15049 updated peer set (2): [127.0.0.1:15049]
2016/06/23 07:44:48 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:48 [INFO] consul: member 'Node 15048' joined, marking health alive
2016/06/23 07:44:48 [INFO] consul: shutting down server
2016/06/23 07:44:48 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:48 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:49 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACL_Authority_NotFound (2.86s)
=== RUN   TestACL_Authority_Found
2016/06/23 07:44:50 [INFO] raft: Node at 127.0.0.1:15053 [Follower] entering Follower state
2016/06/23 07:44:50 [INFO] serf: EventMemberJoin: Node 15052 127.0.0.1
2016/06/23 07:44:50 [INFO] consul: adding LAN server Node 15052 (Addr: 127.0.0.1:15053) (DC: dc1)
2016/06/23 07:44:50 [INFO] serf: EventMemberJoin: Node 15052.dc1 127.0.0.1
2016/06/23 07:44:50 [INFO] consul: adding WAN server Node 15052.dc1 (Addr: 127.0.0.1:15053) (DC: dc1)
2016/06/23 07:44:50 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:50 [INFO] raft: Node at 127.0.0.1:15053 [Candidate] entering Candidate state
2016/06/23 07:44:50 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:50 [DEBUG] raft: Vote granted from 127.0.0.1:15053. Tally: 1
2016/06/23 07:44:50 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:50 [INFO] raft: Node at 127.0.0.1:15053 [Leader] entering Leader state
2016/06/23 07:44:50 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:50 [INFO] consul: New leader elected: Node 15052
2016/06/23 07:44:50 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:51 [DEBUG] raft: Node 127.0.0.1:15053 updated peer set (2): [127.0.0.1:15053]
2016/06/23 07:44:51 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:51 [INFO] consul: member 'Node 15052' joined, marking health alive
2016/06/23 07:44:52 [INFO] consul: shutting down server
2016/06/23 07:44:52 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:52 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:52 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:44:52 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACL_Authority_Found (3.31s)
=== RUN   TestACL_Authority_Anonymous_Found
2016/06/23 07:44:53 [INFO] raft: Node at 127.0.0.1:15057 [Follower] entering Follower state
2016/06/23 07:44:53 [INFO] serf: EventMemberJoin: Node 15056 127.0.0.1
2016/06/23 07:44:53 [INFO] consul: adding LAN server Node 15056 (Addr: 127.0.0.1:15057) (DC: dc1)
2016/06/23 07:44:53 [INFO] serf: EventMemberJoin: Node 15056.dc1 127.0.0.1
2016/06/23 07:44:53 [INFO] consul: adding WAN server Node 15056.dc1 (Addr: 127.0.0.1:15057) (DC: dc1)
2016/06/23 07:44:53 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:53 [INFO] raft: Node at 127.0.0.1:15057 [Candidate] entering Candidate state
2016/06/23 07:44:53 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:53 [DEBUG] raft: Vote granted from 127.0.0.1:15057. Tally: 1
2016/06/23 07:44:53 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:53 [INFO] raft: Node at 127.0.0.1:15057 [Leader] entering Leader state
2016/06/23 07:44:53 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:53 [INFO] consul: New leader elected: Node 15056
2016/06/23 07:44:54 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:54 [DEBUG] raft: Node 127.0.0.1:15057 updated peer set (2): [127.0.0.1:15057]
2016/06/23 07:44:54 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:54 [INFO] consul: member 'Node 15056' joined, marking health alive
2016/06/23 07:44:54 [INFO] consul: shutting down server
2016/06/23 07:44:54 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:54 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:54 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/06/23 07:44:54 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACL_Authority_Anonymous_Found (2.41s)
=== RUN   TestACL_Authority_Master_Found
2016/06/23 07:44:55 [INFO] raft: Node at 127.0.0.1:15061 [Follower] entering Follower state
2016/06/23 07:44:55 [INFO] serf: EventMemberJoin: Node 15060 127.0.0.1
2016/06/23 07:44:55 [INFO] consul: adding LAN server Node 15060 (Addr: 127.0.0.1:15061) (DC: dc1)
2016/06/23 07:44:55 [INFO] serf: EventMemberJoin: Node 15060.dc1 127.0.0.1
2016/06/23 07:44:55 [INFO] consul: adding WAN server Node 15060.dc1 (Addr: 127.0.0.1:15061) (DC: dc1)
2016/06/23 07:44:55 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:55 [INFO] raft: Node at 127.0.0.1:15061 [Candidate] entering Candidate state
2016/06/23 07:44:56 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:56 [DEBUG] raft: Vote granted from 127.0.0.1:15061. Tally: 1
2016/06/23 07:44:56 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:56 [INFO] raft: Node at 127.0.0.1:15061 [Leader] entering Leader state
2016/06/23 07:44:56 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:56 [INFO] consul: New leader elected: Node 15060
2016/06/23 07:44:56 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:56 [DEBUG] raft: Node 127.0.0.1:15061 updated peer set (2): [127.0.0.1:15061]
2016/06/23 07:44:56 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:44:57 [INFO] consul: member 'Node 15060' joined, marking health alive
2016/06/23 07:44:57 [INFO] consul: shutting down server
2016/06/23 07:44:57 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:57 [WARN] serf: Shutdown without a Leave
2016/06/23 07:44:57 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACL_Authority_Master_Found (2.72s)
=== RUN   TestACL_Authority_Management
2016/06/23 07:44:58 [INFO] raft: Node at 127.0.0.1:15065 [Follower] entering Follower state
2016/06/23 07:44:58 [INFO] serf: EventMemberJoin: Node 15064 127.0.0.1
2016/06/23 07:44:58 [INFO] consul: adding LAN server Node 15064 (Addr: 127.0.0.1:15065) (DC: dc1)
2016/06/23 07:44:58 [INFO] serf: EventMemberJoin: Node 15064.dc1 127.0.0.1
2016/06/23 07:44:58 [INFO] consul: adding WAN server Node 15064.dc1 (Addr: 127.0.0.1:15065) (DC: dc1)
2016/06/23 07:44:58 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:44:58 [INFO] raft: Node at 127.0.0.1:15065 [Candidate] entering Candidate state
2016/06/23 07:44:59 [DEBUG] raft: Votes needed: 1
2016/06/23 07:44:59 [DEBUG] raft: Vote granted from 127.0.0.1:15065. Tally: 1
2016/06/23 07:44:59 [INFO] raft: Election won. Tally: 1
2016/06/23 07:44:59 [INFO] raft: Node at 127.0.0.1:15065 [Leader] entering Leader state
2016/06/23 07:44:59 [INFO] consul: cluster leadership acquired
2016/06/23 07:44:59 [INFO] consul: New leader elected: Node 15064
2016/06/23 07:44:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:44:59 [DEBUG] raft: Node 127.0.0.1:15065 updated peer set (2): [127.0.0.1:15065]
2016/06/23 07:44:59 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:00 [INFO] consul: member 'Node 15064' joined, marking health alive
2016/06/23 07:45:00 [INFO] consul: shutting down server
2016/06/23 07:45:00 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:00 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:00 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestACL_Authority_Management (3.27s)
=== RUN   TestACL_NonAuthority_NotFound
2016/06/23 07:45:01 [INFO] raft: Node at 127.0.0.1:15069 [Follower] entering Follower state
2016/06/23 07:45:01 [INFO] serf: EventMemberJoin: Node 15068 127.0.0.1
2016/06/23 07:45:01 [INFO] consul: adding LAN server Node 15068 (Addr: 127.0.0.1:15069) (DC: dc1)
2016/06/23 07:45:01 [INFO] serf: EventMemberJoin: Node 15068.dc1 127.0.0.1
2016/06/23 07:45:01 [INFO] consul: adding WAN server Node 15068.dc1 (Addr: 127.0.0.1:15069) (DC: dc1)
2016/06/23 07:45:01 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:01 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/06/23 07:45:02 [INFO] raft: Node at 127.0.0.1:15073 [Follower] entering Follower state
2016/06/23 07:45:02 [INFO] serf: EventMemberJoin: Node 15072 127.0.0.1
2016/06/23 07:45:02 [INFO] consul: adding LAN server Node 15072 (Addr: 127.0.0.1:15073) (DC: dc1)
2016/06/23 07:45:02 [INFO] serf: EventMemberJoin: Node 15072.dc1 127.0.0.1
2016/06/23 07:45:02 [INFO] consul: adding WAN server Node 15072.dc1 (Addr: 127.0.0.1:15073) (DC: dc1)
2016/06/23 07:45:02 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15070
2016/06/23 07:45:02 [DEBUG] memberlist: TCP connection from=127.0.0.1:46418
2016/06/23 07:45:02 [INFO] serf: EventMemberJoin: Node 15072 127.0.0.1
2016/06/23 07:45:02 [INFO] consul: adding LAN server Node 15072 (Addr: 127.0.0.1:15073) (DC: dc1)
2016/06/23 07:45:02 [INFO] serf: EventMemberJoin: Node 15068 127.0.0.1
2016/06/23 07:45:02 [INFO] consul: adding LAN server Node 15068 (Addr: 127.0.0.1:15069) (DC: dc1)
2016/06/23 07:45:02 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:45:02 [DEBUG] serf: messageJoinType: Node 15072
2016/06/23 07:45:02 [DEBUG] serf: messageJoinType: Node 15072
2016/06/23 07:45:02 [DEBUG] memberlist: Potential blocking operation. Last command took 12.114371ms
2016/06/23 07:45:02 [DEBUG] serf: messageJoinType: Node 15072
2016/06/23 07:45:02 [DEBUG] serf: messageJoinType: Node 15072
2016/06/23 07:45:02 [DEBUG] serf: messageJoinType: Node 15072
2016/06/23 07:45:02 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:02 [DEBUG] raft: Vote granted from 127.0.0.1:15069. Tally: 1
2016/06/23 07:45:02 [INFO] raft: Election won. Tally: 1
2016/06/23 07:45:02 [INFO] raft: Node at 127.0.0.1:15069 [Leader] entering Leader state
2016/06/23 07:45:02 [INFO] consul: cluster leadership acquired
2016/06/23 07:45:02 [INFO] consul: New leader elected: Node 15068
2016/06/23 07:45:03 [DEBUG] serf: messageJoinType: Node 15072
2016/06/23 07:45:03 [DEBUG] serf: messageJoinType: Node 15072
2016/06/23 07:45:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:03 [INFO] consul: New leader elected: Node 15068
2016/06/23 07:45:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:03 [DEBUG] serf: messageJoinType: Node 15072
2016/06/23 07:45:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:03 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:45:03 [DEBUG] raft: Node 127.0.0.1:15069 updated peer set (2): [127.0.0.1:15069]
2016/06/23 07:45:03 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:03 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:03 [DEBUG] memberlist: Potential blocking operation. Last command took 10.076308ms
2016/06/23 07:45:03 [INFO] consul: member 'Node 15068' joined, marking health alive
2016/06/23 07:45:03 [DEBUG] raft: Node 127.0.0.1:15069 updated peer set (2): [127.0.0.1:15073 127.0.0.1:15069]
2016/06/23 07:45:03 [INFO] raft: Added peer 127.0.0.1:15073, starting replication
2016/06/23 07:45:03 [DEBUG] raft-net: 127.0.0.1:15073 accepted connection from: 127.0.0.1:39288
2016/06/23 07:45:03 [DEBUG] raft-net: 127.0.0.1:15073 accepted connection from: 127.0.0.1:39292
2016/06/23 07:45:03 [ERR] consul.acl: Failed to get policy for 'does not exist': No cluster leader
2016/06/23 07:45:03 [INFO] consul: shutting down server
2016/06/23 07:45:03 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:04 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:04 [DEBUG] memberlist: Failed UDP ping: Node 15072 (timeout reached)
2016/06/23 07:45:04 [INFO] memberlist: Suspect Node 15072 has failed, no acks received
2016/06/23 07:45:04 [WARN] raft: Failed to get previous log: 4 log not found (last: 0)
2016/06/23 07:45:04 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:45:04 [DEBUG] raft: Failed to contact 127.0.0.1:15073 in 321.213498ms
2016/06/23 07:45:04 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:45:04 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15073: EOF
2016/06/23 07:45:04 [INFO] consul: cluster leadership lost
2016/06/23 07:45:04 [INFO] raft: Node at 127.0.0.1:15069 [Follower] entering Follower state
2016/06/23 07:45:04 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:45:04 [ERR] consul: failed to reconcile member: {Node 15072 127.0.0.1 15074 map[dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15073 role:consul] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:45:04 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:45:04 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:04 [INFO] raft: Node at 127.0.0.1:15069 [Candidate] entering Candidate state
2016/06/23 07:45:04 [DEBUG] memberlist: Failed UDP ping: Node 15072 (timeout reached)
2016/06/23 07:45:04 [INFO] memberlist: Suspect Node 15072 has failed, no acks received
2016/06/23 07:45:04 [INFO] memberlist: Marking Node 15072 as failed, suspect timeout reached
2016/06/23 07:45:04 [INFO] serf: EventMemberFailed: Node 15072 127.0.0.1
2016/06/23 07:45:04 [INFO] consul: removing LAN server Node 15072 (Addr: 127.0.0.1:15073) (DC: dc1)
2016/06/23 07:45:04 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:45:04 [ERR] raft: Failed to heartbeat to 127.0.0.1:15073: EOF
2016/06/23 07:45:04 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15073: dial tcp 127.0.0.1:15073: getsockopt: connection refused
2016/06/23 07:45:04 [INFO] consul: shutting down server
2016/06/23 07:45:04 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:04 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:05 [DEBUG] raft: Votes needed: 2
--- FAIL: TestACL_NonAuthority_NotFound (4.05s)
	acl_test.go:246: err: <nil>
=== RUN   TestACL_NonAuthority_Found
2016/06/23 07:45:05 [INFO] raft: Node at 127.0.0.1:15077 [Follower] entering Follower state
2016/06/23 07:45:05 [INFO] serf: EventMemberJoin: Node 15076 127.0.0.1
2016/06/23 07:45:05 [INFO] consul: adding LAN server Node 15076 (Addr: 127.0.0.1:15077) (DC: dc1)
2016/06/23 07:45:05 [INFO] serf: EventMemberJoin: Node 15076.dc1 127.0.0.1
2016/06/23 07:45:05 [INFO] consul: adding WAN server Node 15076.dc1 (Addr: 127.0.0.1:15077) (DC: dc1)
2016/06/23 07:45:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:05 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:06 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:06 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:06 [INFO] raft: Election won. Tally: 1
2016/06/23 07:45:06 [INFO] raft: Node at 127.0.0.1:15077 [Leader] entering Leader state
2016/06/23 07:45:06 [INFO] consul: cluster leadership acquired
2016/06/23 07:45:06 [INFO] consul: New leader elected: Node 15076
2016/06/23 07:45:06 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:45:06 [INFO] raft: Node at 127.0.0.1:15081 [Follower] entering Follower state
2016/06/23 07:45:06 [INFO] serf: EventMemberJoin: Node 15080 127.0.0.1
2016/06/23 07:45:06 [INFO] serf: EventMemberJoin: Node 15080.dc1 127.0.0.1
2016/06/23 07:45:06 [INFO] consul: adding LAN server Node 15080 (Addr: 127.0.0.1:15081) (DC: dc1)
2016/06/23 07:45:06 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15078
2016/06/23 07:45:06 [INFO] consul: adding WAN server Node 15080.dc1 (Addr: 127.0.0.1:15081) (DC: dc1)
2016/06/23 07:45:06 [DEBUG] memberlist: TCP connection from=127.0.0.1:42538
2016/06/23 07:45:06 [INFO] serf: EventMemberJoin: Node 15080 127.0.0.1
2016/06/23 07:45:06 [INFO] consul: adding LAN server Node 15080 (Addr: 127.0.0.1:15081) (DC: dc1)
2016/06/23 07:45:06 [INFO] serf: EventMemberJoin: Node 15076 127.0.0.1
2016/06/23 07:45:06 [INFO] consul: adding LAN server Node 15076 (Addr: 127.0.0.1:15077) (DC: dc1)
2016/06/23 07:45:06 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:06 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:45:06 [DEBUG] serf: messageJoinType: Node 15080
2016/06/23 07:45:06 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:06 [DEBUG] serf: messageJoinType: Node 15080
2016/06/23 07:45:06 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:06 [DEBUG] serf: messageJoinType: Node 15080
2016/06/23 07:45:06 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:06 [DEBUG] serf: messageJoinType: Node 15080
2016/06/23 07:45:06 [DEBUG] raft: Node 127.0.0.1:15077 updated peer set (2): [127.0.0.1:15077]
2016/06/23 07:45:06 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:06 [DEBUG] serf: messageJoinType: Node 15080
2016/06/23 07:45:06 [DEBUG] serf: messageJoinType: Node 15080
2016/06/23 07:45:06 [DEBUG] serf: messageJoinType: Node 15080
2016/06/23 07:45:06 [DEBUG] serf: messageJoinType: Node 15080
2016/06/23 07:45:07 [DEBUG] raft: Node 127.0.0.1:15077 updated peer set (2): [127.0.0.1:15081 127.0.0.1:15077]
2016/06/23 07:45:07 [INFO] raft: Added peer 127.0.0.1:15081, starting replication
2016/06/23 07:45:07 [DEBUG] raft-net: 127.0.0.1:15081 accepted connection from: 127.0.0.1:45784
2016/06/23 07:45:07 [DEBUG] raft-net: 127.0.0.1:15081 accepted connection from: 127.0.0.1:45786
2016/06/23 07:45:07 [DEBUG] raft: Failed to contact 127.0.0.1:15081 in 249.665641ms
2016/06/23 07:45:07 [WARN] raft: Failed to get previous log: 4 log not found (last: 0)
2016/06/23 07:45:07 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:45:07 [INFO] raft: Node at 127.0.0.1:15077 [Follower] entering Follower state
2016/06/23 07:45:07 [INFO] consul: cluster leadership lost
2016/06/23 07:45:07 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:45:07 [WARN] raft: AppendEntries to 127.0.0.1:15081 rejected, sending older logs (next: 1)
2016/06/23 07:45:07 [ERR] consul: failed to reconcile member: {Node 15080 127.0.0.1 15082 map[port:15081 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build:] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:45:07 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:45:07 [ERR] consul: failed to wait for barrier: node is not the leader
2016/06/23 07:45:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:07 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:07 [DEBUG] raft-net: 127.0.0.1:15081 accepted connection from: 127.0.0.1:45788
2016/06/23 07:45:07 [DEBUG] raft: Node 127.0.0.1:15081 updated peer set (2): [127.0.0.1:15077]
2016/06/23 07:45:07 [INFO] raft: pipelining replication to peer 127.0.0.1:15081
2016/06/23 07:45:07 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15081
2016/06/23 07:45:07 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:07 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:07 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:07 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:08 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:08 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:08 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:08 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:08 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:08 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:09 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:09 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:09 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:09 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:09 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:09 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:10 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:10 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:10 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:10 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:10 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:10 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:10 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:10 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:10 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:10 [INFO] raft: Node at 127.0.0.1:15081 [Candidate] entering Candidate state
2016/06/23 07:45:11 [DEBUG] raft-net: 127.0.0.1:15077 accepted connection from: 127.0.0.1:41636
2016/06/23 07:45:11 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:11 [INFO] raft: Duplicate RequestVote for same term: 8
2016/06/23 07:45:11 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:11 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:11 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:11 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:11 [INFO] raft: Duplicate RequestVote for same term: 8
2016/06/23 07:45:11 [DEBUG] raft: Vote granted from 127.0.0.1:15081. Tally: 1
2016/06/23 07:45:11 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:11 [INFO] raft: Node at 127.0.0.1:15081 [Candidate] entering Candidate state
2016/06/23 07:45:11 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:11 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:11 [INFO] raft: Duplicate RequestVote for same term: 9
2016/06/23 07:45:11 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:11 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:11 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:11 [INFO] raft: Duplicate RequestVote for same term: 9
2016/06/23 07:45:11 [DEBUG] raft: Vote granted from 127.0.0.1:15081. Tally: 1
2016/06/23 07:45:12 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:12 [INFO] raft: Node at 127.0.0.1:15081 [Candidate] entering Candidate state
2016/06/23 07:45:12 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:12 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:12 [INFO] raft: Duplicate RequestVote for same term: 10
2016/06/23 07:45:12 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:12 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:12 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:12 [DEBUG] raft: Vote granted from 127.0.0.1:15081. Tally: 1
2016/06/23 07:45:12 [INFO] raft: Duplicate RequestVote for same term: 10
2016/06/23 07:45:12 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:12 [INFO] raft: Node at 127.0.0.1:15081 [Candidate] entering Candidate state
2016/06/23 07:45:13 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:13 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:13 [INFO] raft: Duplicate RequestVote for same term: 11
2016/06/23 07:45:13 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:13 [INFO] raft: Duplicate RequestVote for same term: 11
2016/06/23 07:45:13 [DEBUG] raft: Vote granted from 127.0.0.1:15081. Tally: 1
2016/06/23 07:45:13 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:13 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:13 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:13 [INFO] raft: Node at 127.0.0.1:15081 [Candidate] entering Candidate state
2016/06/23 07:45:13 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:13 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:13 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:13 [DEBUG] raft: Vote granted from 127.0.0.1:15081. Tally: 1
2016/06/23 07:45:13 [INFO] raft: Duplicate RequestVote for same term: 12
2016/06/23 07:45:13 [INFO] raft: Duplicate RequestVote for same term: 12
2016/06/23 07:45:13 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:13 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:13 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:13 [INFO] raft: Node at 127.0.0.1:15081 [Candidate] entering Candidate state
2016/06/23 07:45:14 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:14 [INFO] raft: Duplicate RequestVote for same term: 13
2016/06/23 07:45:14 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:14 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:14 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:14 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:14 [DEBUG] raft: Vote granted from 127.0.0.1:15081. Tally: 1
2016/06/23 07:45:14 [INFO] raft: Duplicate RequestVote for same term: 13
2016/06/23 07:45:14 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:14 [INFO] raft: Node at 127.0.0.1:15081 [Candidate] entering Candidate state
2016/06/23 07:45:15 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:15 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:15 [INFO] raft: Duplicate RequestVote for same term: 14
2016/06/23 07:45:15 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:15 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:15 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:15 [DEBUG] raft: Vote granted from 127.0.0.1:15081. Tally: 1
2016/06/23 07:45:15 [INFO] raft: Duplicate RequestVote for same term: 14
2016/06/23 07:45:15 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:15 [INFO] raft: Node at 127.0.0.1:15081 [Candidate] entering Candidate state
2016/06/23 07:45:16 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:16 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:16 [INFO] raft: Duplicate RequestVote for same term: 15
2016/06/23 07:45:16 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:16 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:16 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:16 [DEBUG] raft: Vote granted from 127.0.0.1:15081. Tally: 1
2016/06/23 07:45:16 [INFO] raft: Duplicate RequestVote for same term: 15
2016/06/23 07:45:16 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:16 [INFO] raft: Node at 127.0.0.1:15081 [Candidate] entering Candidate state
2016/06/23 07:45:16 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:16 [DEBUG] raft: Vote granted from 127.0.0.1:15081. Tally: 1
2016/06/23 07:45:16 [INFO] raft: Duplicate RequestVote for same term: 16
2016/06/23 07:45:16 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:16 [INFO] raft: Duplicate RequestVote for same term: 16
2016/06/23 07:45:16 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:16 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:16 [INFO] raft: Node at 127.0.0.1:15081 [Candidate] entering Candidate state
2016/06/23 07:45:16 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:16 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:17 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:17 [DEBUG] raft: Vote granted from 127.0.0.1:15081. Tally: 1
2016/06/23 07:45:17 [INFO] raft: Duplicate RequestVote for same term: 17
2016/06/23 07:45:17 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:17 [INFO] raft: Node at 127.0.0.1:15081 [Candidate] entering Candidate state
2016/06/23 07:45:17 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:17 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:17 [INFO] raft: Duplicate RequestVote for same term: 17
2016/06/23 07:45:17 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:17 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:17 [INFO] consul: shutting down server
2016/06/23 07:45:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:17 [DEBUG] memberlist: Failed UDP ping: Node 15080 (timeout reached)
2016/06/23 07:45:17 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:45:17 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15081: EOF
2016/06/23 07:45:17 [INFO] memberlist: Suspect Node 15080 has failed, no acks received
2016/06/23 07:45:17 [DEBUG] memberlist: Failed UDP ping: Node 15080 (timeout reached)
2016/06/23 07:45:17 [INFO] memberlist: Suspect Node 15080 has failed, no acks received
2016/06/23 07:45:17 [INFO] memberlist: Marking Node 15080 as failed, suspect timeout reached
2016/06/23 07:45:17 [INFO] serf: EventMemberFailed: Node 15080 127.0.0.1
2016/06/23 07:45:17 [INFO] consul: removing LAN server Node 15080 (Addr: 127.0.0.1:15081) (DC: dc1)
2016/06/23 07:45:17 [DEBUG] memberlist: Failed UDP ping: Node 15080 (timeout reached)
2016/06/23 07:45:17 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:18 [INFO] memberlist: Suspect Node 15080 has failed, no acks received
2016/06/23 07:45:18 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:18 [INFO] raft: Duplicate RequestVote for same term: 18
2016/06/23 07:45:18 [DEBUG] raft: Vote granted from 127.0.0.1:15077. Tally: 1
2016/06/23 07:45:18 [INFO] consul: shutting down server
2016/06/23 07:45:18 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:18 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:45:18 [INFO] raft: Node at 127.0.0.1:15077 [Candidate] entering Candidate state
2016/06/23 07:45:18 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:19 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:45:19 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15081: EOF
2016/06/23 07:45:19 [DEBUG] raft: Votes needed: 2
--- FAIL: TestACL_NonAuthority_Found (14.31s)
	wait.go:41: failed to find leader: No cluster leader
=== RUN   TestACL_NonAuthority_Management
2016/06/23 07:45:19 [INFO] raft: Node at 127.0.0.1:15085 [Follower] entering Follower state
2016/06/23 07:45:19 [INFO] serf: EventMemberJoin: Node 15084 127.0.0.1
2016/06/23 07:45:19 [INFO] consul: adding LAN server Node 15084 (Addr: 127.0.0.1:15085) (DC: dc1)
2016/06/23 07:45:19 [INFO] serf: EventMemberJoin: Node 15084.dc1 127.0.0.1
2016/06/23 07:45:19 [INFO] consul: adding WAN server Node 15084.dc1 (Addr: 127.0.0.1:15085) (DC: dc1)
2016/06/23 07:45:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:19 [INFO] raft: Node at 127.0.0.1:15085 [Candidate] entering Candidate state
2016/06/23 07:45:20 [INFO] raft: Node at 127.0.0.1:15089 [Follower] entering Follower state
2016/06/23 07:45:20 [INFO] serf: EventMemberJoin: Node 15088 127.0.0.1
2016/06/23 07:45:20 [INFO] consul: adding LAN server Node 15088 (Addr: 127.0.0.1:15089) (DC: dc1)
2016/06/23 07:45:20 [INFO] serf: EventMemberJoin: Node 15088.dc1 127.0.0.1
2016/06/23 07:45:20 [INFO] consul: adding WAN server Node 15088.dc1 (Addr: 127.0.0.1:15089) (DC: dc1)
2016/06/23 07:45:20 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15086
2016/06/23 07:45:20 [DEBUG] memberlist: TCP connection from=127.0.0.1:40784
2016/06/23 07:45:20 [INFO] serf: EventMemberJoin: Node 15088 127.0.0.1
2016/06/23 07:45:20 [INFO] consul: adding LAN server Node 15088 (Addr: 127.0.0.1:15089) (DC: dc1)
2016/06/23 07:45:20 [INFO] serf: EventMemberJoin: Node 15084 127.0.0.1
2016/06/23 07:45:20 [INFO] consul: adding LAN server Node 15084 (Addr: 127.0.0.1:15085) (DC: dc1)
2016/06/23 07:45:20 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:45:20 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:20 [DEBUG] raft: Vote granted from 127.0.0.1:15085. Tally: 1
2016/06/23 07:45:20 [INFO] raft: Election won. Tally: 1
2016/06/23 07:45:20 [INFO] raft: Node at 127.0.0.1:15085 [Leader] entering Leader state
2016/06/23 07:45:20 [INFO] consul: cluster leadership acquired
2016/06/23 07:45:20 [INFO] consul: New leader elected: Node 15084
2016/06/23 07:45:20 [DEBUG] serf: messageJoinType: Node 15088
2016/06/23 07:45:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:20 [INFO] consul: New leader elected: Node 15084
2016/06/23 07:45:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:20 [DEBUG] serf: messageJoinType: Node 15088
2016/06/23 07:45:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:20 [DEBUG] serf: messageJoinType: Node 15088
2016/06/23 07:45:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:20 [DEBUG] serf: messageJoinType: Node 15088
2016/06/23 07:45:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:20 [DEBUG] serf: messageJoinType: Node 15088
2016/06/23 07:45:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:20 [DEBUG] serf: messageJoinType: Node 15088
2016/06/23 07:45:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:20 [DEBUG] serf: messageJoinType: Node 15088
2016/06/23 07:45:20 [DEBUG] serf: messageJoinType: Node 15088
2016/06/23 07:45:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:20 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:45:20 [DEBUG] raft: Node 127.0.0.1:15085 updated peer set (2): [127.0.0.1:15085]
2016/06/23 07:45:20 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:21 [INFO] consul: member 'Node 15084' joined, marking health alive
2016/06/23 07:45:21 [DEBUG] raft: Node 127.0.0.1:15085 updated peer set (2): [127.0.0.1:15089 127.0.0.1:15085]
2016/06/23 07:45:21 [INFO] raft: Added peer 127.0.0.1:15089, starting replication
2016/06/23 07:45:21 [DEBUG] raft-net: 127.0.0.1:15089 accepted connection from: 127.0.0.1:41774
2016/06/23 07:45:21 [DEBUG] raft-net: 127.0.0.1:15089 accepted connection from: 127.0.0.1:41776
2016/06/23 07:45:21 [ERR] consul.acl: Failed to get policy for 'foobar': No cluster leader
2016/06/23 07:45:21 [INFO] consul: shutting down server
2016/06/23 07:45:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:21 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/06/23 07:45:21 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:45:21 [DEBUG] raft: Failed to contact 127.0.0.1:15089 in 220.403746ms
2016/06/23 07:45:21 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:45:21 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15089: EOF
2016/06/23 07:45:21 [INFO] consul: cluster leadership lost
2016/06/23 07:45:21 [INFO] raft: Node at 127.0.0.1:15085 [Follower] entering Follower state
2016/06/23 07:45:21 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:45:21 [ERR] consul: failed to reconcile member: {Node 15088 127.0.0.1 15090 map[port:15089 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build:] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:45:21 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:45:21 [DEBUG] memberlist: Failed UDP ping: Node 15088 (timeout reached)
2016/06/23 07:45:21 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:21 [INFO] raft: Node at 127.0.0.1:15085 [Candidate] entering Candidate state
2016/06/23 07:45:21 [INFO] memberlist: Suspect Node 15088 has failed, no acks received
2016/06/23 07:45:22 [INFO] consul: shutting down server
2016/06/23 07:45:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:22 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:45:22 [ERR] raft: Failed to heartbeat to 127.0.0.1:15089: EOF
2016/06/23 07:45:22 [ERR] raft: Failed to heartbeat to 127.0.0.1:15089: dial tcp 127.0.0.1:15089: getsockopt: connection refused
2016/06/23 07:45:22 [ERR] raft: Failed to heartbeat to 127.0.0.1:15089: dial tcp 127.0.0.1:15089: getsockopt: connection refused
2016/06/23 07:45:22 [ERR] raft: Failed to heartbeat to 127.0.0.1:15089: dial tcp 127.0.0.1:15089: getsockopt: connection refused
2016/06/23 07:45:22 [ERR] raft: Failed to heartbeat to 127.0.0.1:15089: dial tcp 127.0.0.1:15089: getsockopt: connection refused
2016/06/23 07:45:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:22 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15089: dial tcp 127.0.0.1:15089: getsockopt: connection refused
2016/06/23 07:45:22 [INFO] memberlist: Marking Node 15088 as failed, suspect timeout reached
2016/06/23 07:45:22 [INFO] serf: EventMemberFailed: Node 15088 127.0.0.1
2016/06/23 07:45:22 [ERR] raft: Failed to heartbeat to 127.0.0.1:15089: dial tcp 127.0.0.1:15089: getsockopt: connection refused
2016/06/23 07:45:22 [ERR] raft: Failed to heartbeat to 127.0.0.1:15089: dial tcp 127.0.0.1:15089: getsockopt: connection refused
2016/06/23 07:45:22 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:22 [ERR] raft: Failed to heartbeat to 127.0.0.1:15089: dial tcp 127.0.0.1:15089: getsockopt: connection refused
2016/06/23 07:45:23 [ERR] raft: Failed to heartbeat to 127.0.0.1:15089: dial tcp 127.0.0.1:15089: getsockopt: connection refused
2016/06/23 07:45:24 [ERR] raft: Failed to heartbeat to 127.0.0.1:15089: dial tcp 127.0.0.1:15089: getsockopt: connection refused
2016/06/23 07:45:27 [ERR] raft: Failed to heartbeat to 127.0.0.1:15089: dial tcp 127.0.0.1:15089: getsockopt: connection refused
2016/06/23 07:45:31 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15089: read tcp 127.0.0.1:41778->127.0.0.1:15089: i/o timeout
--- FAIL: TestACL_NonAuthority_Management (12.58s)
	acl_test.go:380: unexpected failed read
=== RUN   TestACL_DownPolicy_Deny
2016/06/23 07:45:32 [INFO] raft: Node at 127.0.0.1:15093 [Follower] entering Follower state
2016/06/23 07:45:32 [INFO] serf: EventMemberJoin: Node 15092 127.0.0.1
2016/06/23 07:45:32 [INFO] consul: adding LAN server Node 15092 (Addr: 127.0.0.1:15093) (DC: dc1)
2016/06/23 07:45:32 [INFO] serf: EventMemberJoin: Node 15092.dc1 127.0.0.1
2016/06/23 07:45:32 [INFO] consul: adding WAN server Node 15092.dc1 (Addr: 127.0.0.1:15093) (DC: dc1)
2016/06/23 07:45:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:32 [INFO] raft: Node at 127.0.0.1:15093 [Candidate] entering Candidate state
2016/06/23 07:45:33 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:33 [DEBUG] raft: Vote granted from 127.0.0.1:15093. Tally: 1
2016/06/23 07:45:33 [INFO] raft: Election won. Tally: 1
2016/06/23 07:45:33 [INFO] raft: Node at 127.0.0.1:15093 [Leader] entering Leader state
2016/06/23 07:45:33 [INFO] consul: cluster leadership acquired
2016/06/23 07:45:33 [INFO] consul: New leader elected: Node 15092
2016/06/23 07:45:33 [INFO] raft: Node at 127.0.0.1:15097 [Follower] entering Follower state
2016/06/23 07:45:33 [INFO] serf: EventMemberJoin: Node 15096 127.0.0.1
2016/06/23 07:45:33 [INFO] consul: adding LAN server Node 15096 (Addr: 127.0.0.1:15097) (DC: dc1)
2016/06/23 07:45:33 [INFO] serf: EventMemberJoin: Node 15096.dc1 127.0.0.1
2016/06/23 07:45:33 [INFO] consul: adding WAN server Node 15096.dc1 (Addr: 127.0.0.1:15097) (DC: dc1)
2016/06/23 07:45:33 [DEBUG] memberlist: TCP connection from=127.0.0.1:49622
2016/06/23 07:45:33 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15094
2016/06/23 07:45:33 [INFO] serf: EventMemberJoin: Node 15092 127.0.0.1
2016/06/23 07:45:33 [INFO] consul: adding LAN server Node 15092 (Addr: 127.0.0.1:15093) (DC: dc1)
2016/06/23 07:45:33 [INFO] serf: EventMemberJoin: Node 15096 127.0.0.1
2016/06/23 07:45:33 [INFO] consul: adding LAN server Node 15096 (Addr: 127.0.0.1:15097) (DC: dc1)
2016/06/23 07:45:33 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:45:33 [DEBUG] serf: messageJoinType: Node 15096
2016/06/23 07:45:33 [DEBUG] serf: messageJoinType: Node 15096
2016/06/23 07:45:33 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:33 [DEBUG] memberlist: Potential blocking operation. Last command took 16.237497ms
2016/06/23 07:45:33 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:33 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:33 [DEBUG] serf: messageJoinType: Node 15096
2016/06/23 07:45:33 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:45:33 [DEBUG] serf: messageJoinType: Node 15096
2016/06/23 07:45:33 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:33 [DEBUG] serf: messageJoinType: Node 15096
2016/06/23 07:45:33 [DEBUG] serf: messageJoinType: Node 15096
2016/06/23 07:45:33 [DEBUG] serf: messageJoinType: Node 15096
2016/06/23 07:45:33 [DEBUG] serf: messageJoinType: Node 15096
2016/06/23 07:45:33 [DEBUG] raft: Node 127.0.0.1:15093 updated peer set (2): [127.0.0.1:15093]
2016/06/23 07:45:33 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:34 [INFO] consul: member 'Node 15092' joined, marking health alive
2016/06/23 07:45:34 [DEBUG] raft: Node 127.0.0.1:15093 updated peer set (2): [127.0.0.1:15097 127.0.0.1:15093]
2016/06/23 07:45:34 [INFO] raft: Added peer 127.0.0.1:15097, starting replication
2016/06/23 07:45:34 [DEBUG] raft-net: 127.0.0.1:15097 accepted connection from: 127.0.0.1:54706
2016/06/23 07:45:34 [DEBUG] raft-net: 127.0.0.1:15097 accepted connection from: 127.0.0.1:54708
2016/06/23 07:45:34 [DEBUG] raft: Failed to contact 127.0.0.1:15097 in 186.385704ms
2016/06/23 07:45:34 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/06/23 07:45:34 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:45:34 [INFO] raft: Node at 127.0.0.1:15093 [Follower] entering Follower state
2016/06/23 07:45:34 [ERR] consul.acl: Apply failed: node is not the leader
2016/06/23 07:45:34 [INFO] consul: shutting down server
2016/06/23 07:45:34 [WARN] raft: AppendEntries to 127.0.0.1:15097 rejected, sending older logs (next: 1)
2016/06/23 07:45:34 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:34 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:45:34 [ERR] consul: failed to reconcile member: {Node 15096 127.0.0.1 15098 map[vsn:2 vsn_min:1 vsn_max:3 build: port:15097 role:consul dc:dc1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:45:34 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:45:34 [ERR] consul: failed to wait for barrier: node is not the leader
2016/06/23 07:45:34 [INFO] consul: cluster leadership lost
2016/06/23 07:45:34 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:34 [INFO] raft: Node at 127.0.0.1:15093 [Candidate] entering Candidate state
2016/06/23 07:45:34 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:34 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:45:34 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15097: EOF
2016/06/23 07:45:34 [DEBUG] memberlist: Failed UDP ping: Node 15096 (timeout reached)
2016/06/23 07:45:34 [INFO] memberlist: Suspect Node 15096 has failed, no acks received
2016/06/23 07:45:34 [DEBUG] memberlist: Failed UDP ping: Node 15096 (timeout reached)
2016/06/23 07:45:34 [INFO] memberlist: Suspect Node 15096 has failed, no acks received
2016/06/23 07:45:34 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:45:34 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15097: EOF
2016/06/23 07:45:34 [INFO] memberlist: Marking Node 15096 as failed, suspect timeout reached
2016/06/23 07:45:34 [INFO] serf: EventMemberFailed: Node 15096 127.0.0.1
2016/06/23 07:45:34 [INFO] consul: removing LAN server Node 15096 (Addr: 127.0.0.1:15097) (DC: dc1)
2016/06/23 07:45:35 [DEBUG] memberlist: Failed UDP ping: Node 15096 (timeout reached)
2016/06/23 07:45:35 [DEBUG] raft: Node 127.0.0.1:15097 updated peer set (2): [127.0.0.1:15093]
2016/06/23 07:45:35 [INFO] consul: shutting down server
2016/06/23 07:45:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:35 [INFO] memberlist: Suspect Node 15096 has failed, no acks received
2016/06/23 07:45:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:35 [DEBUG] raft: Votes needed: 2
--- FAIL: TestACL_DownPolicy_Deny (3.33s)
	acl_test.go:431: err: node is not the leader
=== RUN   TestACL_DownPolicy_Allow
2016/06/23 07:45:35 [INFO] raft: Node at 127.0.0.1:15101 [Follower] entering Follower state
2016/06/23 07:45:36 [INFO] serf: EventMemberJoin: Node 15100 127.0.0.1
2016/06/23 07:45:36 [INFO] consul: adding LAN server Node 15100 (Addr: 127.0.0.1:15101) (DC: dc1)
2016/06/23 07:45:36 [INFO] serf: EventMemberJoin: Node 15100.dc1 127.0.0.1
2016/06/23 07:45:36 [INFO] consul: adding WAN server Node 15100.dc1 (Addr: 127.0.0.1:15101) (DC: dc1)
2016/06/23 07:45:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:36 [INFO] raft: Node at 127.0.0.1:15101 [Candidate] entering Candidate state
2016/06/23 07:45:36 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:36 [DEBUG] raft: Vote granted from 127.0.0.1:15101. Tally: 1
2016/06/23 07:45:36 [INFO] raft: Election won. Tally: 1
2016/06/23 07:45:36 [INFO] raft: Node at 127.0.0.1:15101 [Leader] entering Leader state
2016/06/23 07:45:36 [INFO] consul: cluster leadership acquired
2016/06/23 07:45:36 [INFO] consul: New leader elected: Node 15100
2016/06/23 07:45:36 [INFO] raft: Node at 127.0.0.1:15105 [Follower] entering Follower state
2016/06/23 07:45:36 [INFO] serf: EventMemberJoin: Node 15104 127.0.0.1
2016/06/23 07:45:36 [INFO] consul: adding LAN server Node 15104 (Addr: 127.0.0.1:15105) (DC: dc1)
2016/06/23 07:45:36 [INFO] serf: EventMemberJoin: Node 15104.dc1 127.0.0.1
2016/06/23 07:45:36 [INFO] consul: adding WAN server Node 15104.dc1 (Addr: 127.0.0.1:15105) (DC: dc1)
2016/06/23 07:45:36 [DEBUG] memberlist: TCP connection from=127.0.0.1:49826
2016/06/23 07:45:36 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15102
2016/06/23 07:45:36 [INFO] serf: EventMemberJoin: Node 15104 127.0.0.1
2016/06/23 07:45:36 [INFO] consul: adding LAN server Node 15104 (Addr: 127.0.0.1:15105) (DC: dc1)
2016/06/23 07:45:36 [INFO] serf: EventMemberJoin: Node 15100 127.0.0.1
2016/06/23 07:45:36 [INFO] consul: adding LAN server Node 15100 (Addr: 127.0.0.1:15101) (DC: dc1)
2016/06/23 07:45:36 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:45:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:45:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:36 [DEBUG] serf: messageJoinType: Node 15104
2016/06/23 07:45:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:36 [DEBUG] serf: messageJoinType: Node 15104
2016/06/23 07:45:36 [DEBUG] serf: messageJoinType: Node 15104
2016/06/23 07:45:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:36 [DEBUG] serf: messageJoinType: Node 15104
2016/06/23 07:45:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:36 [DEBUG] serf: messageJoinType: Node 15104
2016/06/23 07:45:36 [DEBUG] raft: Node 127.0.0.1:15101 updated peer set (2): [127.0.0.1:15101]
2016/06/23 07:45:36 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:36 [DEBUG] serf: messageJoinType: Node 15104
2016/06/23 07:45:36 [DEBUG] serf: messageJoinType: Node 15104
2016/06/23 07:45:36 [DEBUG] serf: messageJoinType: Node 15104
2016/06/23 07:45:37 [INFO] consul: member 'Node 15100' joined, marking health alive
2016/06/23 07:45:37 [DEBUG] raft: Node 127.0.0.1:15101 updated peer set (2): [127.0.0.1:15105 127.0.0.1:15101]
2016/06/23 07:45:37 [INFO] raft: Added peer 127.0.0.1:15105, starting replication
2016/06/23 07:45:37 [DEBUG] raft-net: 127.0.0.1:15105 accepted connection from: 127.0.0.1:45116
2016/06/23 07:45:37 [DEBUG] raft-net: 127.0.0.1:15105 accepted connection from: 127.0.0.1:45118
2016/06/23 07:45:37 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/06/23 07:45:37 [WARN] raft: AppendEntries to 127.0.0.1:15105 rejected, sending older logs (next: 1)
2016/06/23 07:45:37 [DEBUG] raft: Failed to contact 127.0.0.1:15105 in 138.941586ms
2016/06/23 07:45:37 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:45:37 [INFO] raft: Node at 127.0.0.1:15101 [Follower] entering Follower state
2016/06/23 07:45:37 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:45:37 [INFO] consul: cluster leadership lost
2016/06/23 07:45:37 [ERR] consul.acl: Apply failed: leadership lost while committing log
2016/06/23 07:45:37 [INFO] consul: shutting down server
2016/06/23 07:45:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:37 [ERR] consul: failed to reconcile member: {Node 15104 127.0.0.1 15106 map[build: port:15105 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:45:37 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:45:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:37 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:37 [INFO] raft: Node at 127.0.0.1:15101 [Candidate] entering Candidate state
2016/06/23 07:45:37 [DEBUG] raft: Node 127.0.0.1:15105 updated peer set (2): [127.0.0.1:15101]
2016/06/23 07:45:37 [INFO] raft: pipelining replication to peer 127.0.0.1:15105
2016/06/23 07:45:37 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15105
2016/06/23 07:45:37 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:45:37 [ERR] raft: Failed to heartbeat to 127.0.0.1:15105: EOF
2016/06/23 07:45:37 [INFO] consul: shutting down server
2016/06/23 07:45:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:37 [DEBUG] memberlist: Failed UDP ping: Node 15104 (timeout reached)
2016/06/23 07:45:37 [INFO] memberlist: Suspect Node 15104 has failed, no acks received
2016/06/23 07:45:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:37 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15105: dial tcp 127.0.0.1:15105: getsockopt: connection refused
2016/06/23 07:45:38 [INFO] memberlist: Marking Node 15104 as failed, suspect timeout reached
2016/06/23 07:45:38 [INFO] serf: EventMemberFailed: Node 15104 127.0.0.1
2016/06/23 07:45:38 [DEBUG] raft: Votes needed: 2
--- FAIL: TestACL_DownPolicy_Allow (3.00s)
	acl_test.go:505: err: leadership lost while committing log
=== RUN   TestACL_DownPolicy_ExtendCache
2016/06/23 07:45:39 [INFO] raft: Node at 127.0.0.1:15109 [Follower] entering Follower state
2016/06/23 07:45:39 [INFO] serf: EventMemberJoin: Node 15108 127.0.0.1
2016/06/23 07:45:39 [INFO] consul: adding LAN server Node 15108 (Addr: 127.0.0.1:15109) (DC: dc1)
2016/06/23 07:45:39 [INFO] serf: EventMemberJoin: Node 15108.dc1 127.0.0.1
2016/06/23 07:45:39 [INFO] consul: adding WAN server Node 15108.dc1 (Addr: 127.0.0.1:15109) (DC: dc1)
2016/06/23 07:45:39 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:39 [INFO] raft: Node at 127.0.0.1:15109 [Candidate] entering Candidate state
2016/06/23 07:45:39 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:39 [DEBUG] raft: Vote granted from 127.0.0.1:15109. Tally: 1
2016/06/23 07:45:39 [INFO] raft: Election won. Tally: 1
2016/06/23 07:45:39 [INFO] raft: Node at 127.0.0.1:15109 [Leader] entering Leader state
2016/06/23 07:45:39 [INFO] consul: cluster leadership acquired
2016/06/23 07:45:39 [INFO] consul: New leader elected: Node 15108
2016/06/23 07:45:39 [INFO] raft: Node at 127.0.0.1:15113 [Follower] entering Follower state
2016/06/23 07:45:39 [INFO] serf: EventMemberJoin: Node 15112 127.0.0.1
2016/06/23 07:45:39 [INFO] consul: adding LAN server Node 15112 (Addr: 127.0.0.1:15113) (DC: dc1)
2016/06/23 07:45:39 [INFO] serf: EventMemberJoin: Node 15112.dc1 127.0.0.1
2016/06/23 07:45:39 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15110
2016/06/23 07:45:39 [INFO] consul: adding WAN server Node 15112.dc1 (Addr: 127.0.0.1:15113) (DC: dc1)
2016/06/23 07:45:39 [DEBUG] memberlist: TCP connection from=127.0.0.1:44442
2016/06/23 07:45:39 [INFO] serf: EventMemberJoin: Node 15112 127.0.0.1
2016/06/23 07:45:39 [INFO] serf: EventMemberJoin: Node 15108 127.0.0.1
2016/06/23 07:45:39 [INFO] consul: adding LAN server Node 15112 (Addr: 127.0.0.1:15113) (DC: dc1)
2016/06/23 07:45:39 [INFO] consul: adding LAN server Node 15108 (Addr: 127.0.0.1:15109) (DC: dc1)
2016/06/23 07:45:39 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:45:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:39 [DEBUG] serf: messageJoinType: Node 15112
2016/06/23 07:45:39 [DEBUG] serf: messageJoinType: Node 15112
2016/06/23 07:45:39 [DEBUG] serf: messageJoinType: Node 15112
2016/06/23 07:45:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:39 [DEBUG] serf: messageJoinType: Node 15112
2016/06/23 07:45:39 [DEBUG] serf: messageJoinType: Node 15112
2016/06/23 07:45:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:45:39 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:45:39 [DEBUG] raft: Node 127.0.0.1:15109 updated peer set (2): [127.0.0.1:15109]
2016/06/23 07:45:39 [DEBUG] serf: messageJoinType: Node 15112
2016/06/23 07:45:39 [DEBUG] serf: messageJoinType: Node 15112
2016/06/23 07:45:39 [DEBUG] serf: messageJoinType: Node 15112
2016/06/23 07:45:40 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:40 [INFO] consul: member 'Node 15108' joined, marking health alive
2016/06/23 07:45:40 [DEBUG] raft: Node 127.0.0.1:15109 updated peer set (2): [127.0.0.1:15113 127.0.0.1:15109]
2016/06/23 07:45:40 [INFO] raft: Added peer 127.0.0.1:15113, starting replication
2016/06/23 07:45:40 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:59342
2016/06/23 07:45:40 [DEBUG] raft-net: 127.0.0.1:15113 accepted connection from: 127.0.0.1:59344
2016/06/23 07:45:41 [WARN] raft: Failed to get previous log: 5 log not found (last: 0)
2016/06/23 07:45:41 [WARN] raft: AppendEntries to 127.0.0.1:15113 rejected, sending older logs (next: 1)
2016/06/23 07:45:41 [DEBUG] raft: Failed to contact 127.0.0.1:15113 in 152.452333ms
2016/06/23 07:45:41 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:45:41 [INFO] raft: Node at 127.0.0.1:15109 [Follower] entering Follower state
2016/06/23 07:45:41 [INFO] consul: cluster leadership lost
2016/06/23 07:45:41 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:45:41 [ERR] consul.acl: Apply failed: leadership lost while committing log
2016/06/23 07:45:41 [ERR] consul: failed to reconcile member: {Node 15112 127.0.0.1 15114 map[port:15113 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build:] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:45:41 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:45:41 [INFO] consul: shutting down server
2016/06/23 07:45:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:41 [INFO] raft: Node at 127.0.0.1:15109 [Candidate] entering Candidate state
2016/06/23 07:45:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:41 [DEBUG] memberlist: Failed UDP ping: Node 15112 (timeout reached)
2016/06/23 07:45:41 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:45:41 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:45:41 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15113: EOF
2016/06/23 07:45:41 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15113: EOF
2016/06/23 07:45:41 [INFO] memberlist: Suspect Node 15112 has failed, no acks received
2016/06/23 07:45:41 [DEBUG] memberlist: Failed UDP ping: Node 15112 (timeout reached)
2016/06/23 07:45:41 [DEBUG] raft: Node 127.0.0.1:15113 updated peer set (2): [127.0.0.1:15109]
2016/06/23 07:45:41 [INFO] consul: shutting down server
2016/06/23 07:45:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:41 [INFO] memberlist: Suspect Node 15112 has failed, no acks received
2016/06/23 07:45:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:41 [INFO] memberlist: Marking Node 15112 as failed, suspect timeout reached
2016/06/23 07:45:41 [INFO] serf: EventMemberFailed: Node 15112 127.0.0.1
2016/06/23 07:45:41 [DEBUG] raft: Votes needed: 2
2016/06/23 07:45:51 [ERR] raft: Failed to heartbeat to 127.0.0.1:15113: read tcp 127.0.0.1:59348->127.0.0.1:15113: i/o timeout
2016/06/23 07:45:51 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15113: read tcp 127.0.0.1:59350->127.0.0.1:15113: i/o timeout
--- FAIL: TestACL_DownPolicy_ExtendCache (13.26s)
	acl_test.go:581: err: leadership lost while committing log
=== RUN   TestACL_MultiDC_Found
2016/06/23 07:45:52 [INFO] raft: Node at 127.0.0.1:15117 [Follower] entering Follower state
2016/06/23 07:45:52 [INFO] serf: EventMemberJoin: Node 15116 127.0.0.1
2016/06/23 07:45:52 [INFO] consul: adding LAN server Node 15116 (Addr: 127.0.0.1:15117) (DC: dc1)
2016/06/23 07:45:52 [INFO] serf: EventMemberJoin: Node 15116.dc1 127.0.0.1
2016/06/23 07:45:52 [INFO] consul: adding WAN server Node 15116.dc1 (Addr: 127.0.0.1:15117) (DC: dc1)
2016/06/23 07:45:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:52 [INFO] raft: Node at 127.0.0.1:15117 [Candidate] entering Candidate state
2016/06/23 07:45:52 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:52 [DEBUG] raft: Vote granted from 127.0.0.1:15117. Tally: 1
2016/06/23 07:45:52 [INFO] raft: Election won. Tally: 1
2016/06/23 07:45:52 [INFO] raft: Node at 127.0.0.1:15117 [Leader] entering Leader state
2016/06/23 07:45:52 [INFO] raft: Node at 127.0.0.1:15121 [Follower] entering Follower state
2016/06/23 07:45:52 [INFO] consul: cluster leadership acquired
2016/06/23 07:45:52 [INFO] serf: EventMemberJoin: Node 15120 127.0.0.1
2016/06/23 07:45:52 [INFO] consul: New leader elected: Node 15116
2016/06/23 07:45:52 [INFO] consul: adding LAN server Node 15120 (Addr: 127.0.0.1:15121) (DC: dc2)
2016/06/23 07:45:52 [INFO] serf: EventMemberJoin: Node 15120.dc2 127.0.0.1
2016/06/23 07:45:52 [INFO] consul: adding WAN server Node 15120.dc2 (Addr: 127.0.0.1:15121) (DC: dc2)
2016/06/23 07:45:52 [DEBUG] memberlist: TCP connection from=127.0.0.1:35880
2016/06/23 07:45:52 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15119
2016/06/23 07:45:52 [INFO] serf: EventMemberJoin: Node 15120.dc2 127.0.0.1
2016/06/23 07:45:52 [INFO] consul: adding WAN server Node 15120.dc2 (Addr: 127.0.0.1:15121) (DC: dc2)
2016/06/23 07:45:52 [INFO] serf: EventMemberJoin: Node 15116.dc1 127.0.0.1
2016/06/23 07:45:52 [INFO] consul: adding WAN server Node 15116.dc1 (Addr: 127.0.0.1:15117) (DC: dc1)
2016/06/23 07:45:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:52 [INFO] raft: Node at 127.0.0.1:15121 [Candidate] entering Candidate state
2016/06/23 07:45:52 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/06/23 07:45:52 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/06/23 07:45:52 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/06/23 07:45:53 [DEBUG] memberlist: Potential blocking operation. Last command took 11.761026ms
2016/06/23 07:45:53 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/06/23 07:45:53 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/06/23 07:45:53 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/06/23 07:45:53 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/06/23 07:45:53 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:45:53 [DEBUG] serf: messageJoinType: Node 15120.dc2
2016/06/23 07:45:53 [DEBUG] raft: Node 127.0.0.1:15117 updated peer set (2): [127.0.0.1:15117]
2016/06/23 07:45:53 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:54 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:54 [DEBUG] raft: Vote granted from 127.0.0.1:15121. Tally: 1
2016/06/23 07:45:54 [INFO] raft: Election won. Tally: 1
2016/06/23 07:45:54 [INFO] raft: Node at 127.0.0.1:15121 [Leader] entering Leader state
2016/06/23 07:45:54 [INFO] consul: cluster leadership acquired
2016/06/23 07:45:54 [INFO] consul: New leader elected: Node 15120
2016/06/23 07:45:54 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:45:54 [INFO] consul: member 'Node 15116' joined, marking health alive
2016/06/23 07:45:54 [DEBUG] raft: Node 127.0.0.1:15121 updated peer set (2): [127.0.0.1:15121]
2016/06/23 07:45:54 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:54 [INFO] consul: member 'Node 15120' joined, marking health alive
2016/06/23 07:45:54 [INFO] consul: shutting down server
2016/06/23 07:45:54 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:54 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:54 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:45:54 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/06/23 07:45:54 [INFO] consul: shutting down server
2016/06/23 07:45:54 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:55 [DEBUG] memberlist: Failed UDP ping: Node 15120.dc2 (timeout reached)
2016/06/23 07:45:55 [WARN] serf: Shutdown without a Leave
2016/06/23 07:45:55 [INFO] memberlist: Suspect Node 15120.dc2 has failed, no acks received
2016/06/23 07:45:55 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:45:55 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACL_MultiDC_Found (3.68s)
=== RUN   TestACL_filterHealthChecks
2016/06/23 07:45:55 [DEBUG] consul: dropping check "check1" from result due to ACLs
--- PASS: TestACL_filterHealthChecks (0.00s)
=== RUN   TestACL_filterServices
2016/06/23 07:45:55 [DEBUG] consul: dropping service "service1" from result due to ACLs
2016/06/23 07:45:55 [DEBUG] consul: dropping service "service2" from result due to ACLs
--- PASS: TestACL_filterServices (0.00s)
=== RUN   TestACL_filterServiceNodes
2016/06/23 07:45:55 [DEBUG] consul: dropping node "node1" from result due to ACLs
--- PASS: TestACL_filterServiceNodes (0.00s)
=== RUN   TestACL_filterNodeServices
2016/06/23 07:45:55 [DEBUG] consul: dropping service "foo" from result due to ACLs
--- PASS: TestACL_filterNodeServices (0.00s)
=== RUN   TestACL_filterCheckServiceNodes
2016/06/23 07:45:55 [DEBUG] consul: dropping node "node1" from result due to ACLs
--- PASS: TestACL_filterCheckServiceNodes (0.00s)
=== RUN   TestACL_filterNodeDump
2016/06/23 07:45:55 [DEBUG] consul: dropping service "foo" from result due to ACLs
2016/06/23 07:45:55 [DEBUG] consul: dropping check "check1" from result due to ACLs
--- PASS: TestACL_filterNodeDump (0.00s)
=== RUN   TestACL_redactPreparedQueryTokens
--- PASS: TestACL_redactPreparedQueryTokens (0.00s)
=== RUN   TestACL_filterPreparedQueries
2016/06/23 07:45:55 [DEBUG] consul: dropping prepared query "f004177f-2c28-83b7-4229-eacc25fe55d1" from result due to ACLs
2016/06/23 07:45:55 [DEBUG] consul: dropping prepared query "f004177f-2c28-83b7-4229-eacc25fe55d2" from result due to ACLs
2016/06/23 07:45:55 [DEBUG] consul: dropping prepared query "f004177f-2c28-83b7-4229-eacc25fe55d3" from result due to ACLs
--- PASS: TestACL_filterPreparedQueries (0.00s)
=== RUN   TestACL_unhandledFilterType
2016/06/23 07:45:55 [INFO] memberlist: Marking Node 15120.dc2 as failed, suspect timeout reached
2016/06/23 07:45:55 [INFO] serf: EventMemberFailed: Node 15120.dc2 127.0.0.1
2016/06/23 07:45:56 [INFO] raft: Node at 127.0.0.1:15125 [Follower] entering Follower state
2016/06/23 07:45:56 [INFO] serf: EventMemberJoin: Node 15124 127.0.0.1
2016/06/23 07:45:56 [INFO] consul: adding LAN server Node 15124 (Addr: 127.0.0.1:15125) (DC: dc1)
2016/06/23 07:45:56 [INFO] serf: EventMemberJoin: Node 15124.dc1 127.0.0.1
2016/06/23 07:45:56 [INFO] consul: adding WAN server Node 15124.dc1 (Addr: 127.0.0.1:15125) (DC: dc1)
2016/06/23 07:45:56 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:45:56 [INFO] raft: Node at 127.0.0.1:15125 [Candidate] entering Candidate state
2016/06/23 07:45:56 [DEBUG] raft: Votes needed: 1
2016/06/23 07:45:56 [DEBUG] raft: Vote granted from 127.0.0.1:15125. Tally: 1
2016/06/23 07:45:56 [INFO] raft: Election won. Tally: 1
2016/06/23 07:45:56 [INFO] raft: Node at 127.0.0.1:15125 [Leader] entering Leader state
2016/06/23 07:45:56 [INFO] consul: cluster leadership acquired
2016/06/23 07:45:56 [INFO] consul: New leader elected: Node 15124
2016/06/23 07:45:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:45:57 [DEBUG] raft: Node 127.0.0.1:15125 updated peer set (2): [127.0.0.1:15125]
2016/06/23 07:45:57 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:45:58 [INFO] consul: member 'Node 15124' joined, marking health alive
2016/06/23 07:46:00 [INFO] consul: shutting down server
2016/06/23 07:46:00 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:00 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:00 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:46:00 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestACL_unhandledFilterType (5.39s)
=== RUN   TestCatalogRegister
2016/06/23 07:46:01 [INFO] raft: Node at 127.0.0.1:15129 [Follower] entering Follower state
2016/06/23 07:46:01 [INFO] serf: EventMemberJoin: Node 15128 127.0.0.1
2016/06/23 07:46:01 [INFO] consul: adding LAN server Node 15128 (Addr: 127.0.0.1:15129) (DC: dc1)
2016/06/23 07:46:01 [INFO] serf: EventMemberJoin: Node 15128.dc1 127.0.0.1
2016/06/23 07:46:01 [INFO] consul: adding WAN server Node 15128.dc1 (Addr: 127.0.0.1:15129) (DC: dc1)
2016/06/23 07:46:01 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:01 [INFO] raft: Node at 127.0.0.1:15129 [Candidate] entering Candidate state
2016/06/23 07:46:02 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:02 [DEBUG] raft: Vote granted from 127.0.0.1:15129. Tally: 1
2016/06/23 07:46:02 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:02 [INFO] raft: Node at 127.0.0.1:15129 [Leader] entering Leader state
2016/06/23 07:46:02 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:02 [INFO] consul: New leader elected: Node 15128
2016/06/23 07:46:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:02 [DEBUG] raft: Node 127.0.0.1:15129 updated peer set (2): [127.0.0.1:15129]
2016/06/23 07:46:02 [DEBUG] consul: reset tombstone GC to index 3
2016/06/23 07:46:02 [INFO] consul: member 'Node 15128' joined, marking health alive
2016/06/23 07:46:02 [INFO] consul: shutting down server
2016/06/23 07:46:02 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:03 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:03 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:46:03 [ERR] consul: failed to reconcile member: {Node 15128 127.0.0.1 15130 map[bootstrap:1 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15129] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:46:03 [ERR] consul: failed to reconcile: leadership lost while committing log
--- PASS: TestCatalogRegister (2.79s)
=== RUN   TestCatalogRegister_ACLDeny
2016/06/23 07:46:03 [INFO] raft: Node at 127.0.0.1:15133 [Follower] entering Follower state
2016/06/23 07:46:03 [INFO] serf: EventMemberJoin: Node 15132 127.0.0.1
2016/06/23 07:46:03 [INFO] consul: adding LAN server Node 15132 (Addr: 127.0.0.1:15133) (DC: dc1)
2016/06/23 07:46:04 [INFO] serf: EventMemberJoin: Node 15132.dc1 127.0.0.1
2016/06/23 07:46:04 [INFO] consul: adding WAN server Node 15132.dc1 (Addr: 127.0.0.1:15133) (DC: dc1)
2016/06/23 07:46:04 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:04 [INFO] raft: Node at 127.0.0.1:15133 [Candidate] entering Candidate state
2016/06/23 07:46:04 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:04 [DEBUG] raft: Vote granted from 127.0.0.1:15133. Tally: 1
2016/06/23 07:46:04 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:04 [INFO] raft: Node at 127.0.0.1:15133 [Leader] entering Leader state
2016/06/23 07:46:04 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:04 [INFO] consul: New leader elected: Node 15132
2016/06/23 07:46:05 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:05 [DEBUG] raft: Node 127.0.0.1:15133 updated peer set (2): [127.0.0.1:15133]
2016/06/23 07:46:05 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:05 [INFO] consul: member 'Node 15132' joined, marking health alive
2016/06/23 07:46:06 [WARN] consul.catalog: Register of service 'db' on 'foo' denied due to ACLs
2016/06/23 07:46:07 [INFO] consul: shutting down server
2016/06/23 07:46:07 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:08 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:08 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogRegister_ACLDeny (4.77s)
=== RUN   TestCatalogRegister_ForwardLeader
2016/06/23 07:46:08 [INFO] raft: Node at 127.0.0.1:15137 [Follower] entering Follower state
2016/06/23 07:46:08 [INFO] serf: EventMemberJoin: Node 15136 127.0.0.1
2016/06/23 07:46:08 [INFO] consul: adding LAN server Node 15136 (Addr: 127.0.0.1:15137) (DC: dc1)
2016/06/23 07:46:08 [INFO] serf: EventMemberJoin: Node 15136.dc1 127.0.0.1
2016/06/23 07:46:08 [INFO] consul: adding WAN server Node 15136.dc1 (Addr: 127.0.0.1:15137) (DC: dc1)
2016/06/23 07:46:08 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:08 [INFO] raft: Node at 127.0.0.1:15137 [Candidate] entering Candidate state
2016/06/23 07:46:09 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:09 [DEBUG] raft: Vote granted from 127.0.0.1:15137. Tally: 1
2016/06/23 07:46:09 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:09 [INFO] raft: Node at 127.0.0.1:15137 [Leader] entering Leader state
2016/06/23 07:46:09 [INFO] raft: Node at 127.0.0.1:15141 [Follower] entering Follower state
2016/06/23 07:46:09 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:09 [INFO] consul: New leader elected: Node 15136
2016/06/23 07:46:09 [INFO] serf: EventMemberJoin: Node 15140 127.0.0.1
2016/06/23 07:46:09 [INFO] consul: adding LAN server Node 15140 (Addr: 127.0.0.1:15141) (DC: dc1)
2016/06/23 07:46:09 [INFO] serf: EventMemberJoin: Node 15140.dc1 127.0.0.1
2016/06/23 07:46:09 [INFO] consul: adding WAN server Node 15140.dc1 (Addr: 127.0.0.1:15141) (DC: dc1)
2016/06/23 07:46:09 [DEBUG] memberlist: TCP connection from=127.0.0.1:55722
2016/06/23 07:46:09 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15138
2016/06/23 07:46:09 [INFO] serf: EventMemberJoin: Node 15140 127.0.0.1
2016/06/23 07:46:09 [INFO] consul: adding LAN server Node 15140 (Addr: 127.0.0.1:15141) (DC: dc1)
2016/06/23 07:46:09 [INFO] serf: EventMemberJoin: Node 15136 127.0.0.1
2016/06/23 07:46:09 [INFO] consul: adding LAN server Node 15136 (Addr: 127.0.0.1:15137) (DC: dc1)
2016/06/23 07:46:09 [DEBUG] serf: messageJoinType: Node 15140
2016/06/23 07:46:09 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:09 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:09 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:09 [INFO] raft: Node at 127.0.0.1:15141 [Candidate] entering Candidate state
2016/06/23 07:46:09 [DEBUG] serf: messageJoinType: Node 15140
2016/06/23 07:46:09 [DEBUG] serf: messageJoinType: Node 15140
2016/06/23 07:46:09 [DEBUG] serf: messageJoinType: Node 15140
2016/06/23 07:46:09 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:09 [DEBUG] serf: messageJoinType: Node 15140
2016/06/23 07:46:09 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:09 [DEBUG] serf: messageJoinType: Node 15140
2016/06/23 07:46:09 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:09 [DEBUG] raft: Node 127.0.0.1:15137 updated peer set (2): [127.0.0.1:15137]
2016/06/23 07:46:09 [DEBUG] serf: messageJoinType: Node 15140
2016/06/23 07:46:09 [DEBUG] serf: messageJoinType: Node 15140
2016/06/23 07:46:09 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:09 [INFO] consul: member 'Node 15136' joined, marking health alive
2016/06/23 07:46:10 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:10 [DEBUG] raft: Vote granted from 127.0.0.1:15141. Tally: 1
2016/06/23 07:46:10 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:10 [INFO] raft: Node at 127.0.0.1:15141 [Leader] entering Leader state
2016/06/23 07:46:10 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:10 [INFO] consul: New leader elected: Node 15140
2016/06/23 07:46:10 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:46:10 [INFO] consul: member 'Node 15140' joined, marking health alive
2016/06/23 07:46:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:10 [INFO] consul: New leader elected: Node 15140
2016/06/23 07:46:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:10 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:10 [DEBUG] raft: Node 127.0.0.1:15141 updated peer set (2): [127.0.0.1:15141]
2016/06/23 07:46:10 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:46:10 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:10 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:46:10 [INFO] consul: member 'Node 15140' joined, marking health alive
2016/06/23 07:46:10 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:46:10 [ERR] consul: 'Node 15136' and 'Node 15140' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:46:10 [INFO] consul: member 'Node 15136' joined, marking health alive
2016/06/23 07:46:11 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:46:11 [INFO] consul: shutting down server
2016/06/23 07:46:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:11 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:46:11 [DEBUG] memberlist: Failed UDP ping: Node 15140 (timeout reached)
2016/06/23 07:46:11 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:46:11 [INFO] consul: shutting down server
2016/06/23 07:46:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:11 [INFO] memberlist: Suspect Node 15140 has failed, no acks received
2016/06/23 07:46:11 [ERR] consul: 'Node 15140' and 'Node 15136' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:46:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:11 [INFO] memberlist: Marking Node 15140 as failed, suspect timeout reached
2016/06/23 07:46:11 [INFO] serf: EventMemberFailed: Node 15140 127.0.0.1
2016/06/23 07:46:11 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:46:11 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogRegister_ForwardLeader (3.63s)
=== RUN   TestCatalogRegister_ForwardDC
2016/06/23 07:46:12 [INFO] raft: Node at 127.0.0.1:15145 [Follower] entering Follower state
2016/06/23 07:46:12 [INFO] serf: EventMemberJoin: Node 15144 127.0.0.1
2016/06/23 07:46:12 [INFO] consul: adding LAN server Node 15144 (Addr: 127.0.0.1:15145) (DC: dc1)
2016/06/23 07:46:12 [INFO] serf: EventMemberJoin: Node 15144.dc1 127.0.0.1
2016/06/23 07:46:12 [INFO] consul: adding WAN server Node 15144.dc1 (Addr: 127.0.0.1:15145) (DC: dc1)
2016/06/23 07:46:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:12 [INFO] raft: Node at 127.0.0.1:15145 [Candidate] entering Candidate state
2016/06/23 07:46:13 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:13 [DEBUG] raft: Vote granted from 127.0.0.1:15145. Tally: 1
2016/06/23 07:46:13 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:13 [INFO] raft: Node at 127.0.0.1:15145 [Leader] entering Leader state
2016/06/23 07:46:13 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:13 [INFO] consul: New leader elected: Node 15144
2016/06/23 07:46:13 [INFO] raft: Node at 127.0.0.1:15149 [Follower] entering Follower state
2016/06/23 07:46:13 [INFO] serf: EventMemberJoin: Node 15148 127.0.0.1
2016/06/23 07:46:13 [INFO] consul: adding LAN server Node 15148 (Addr: 127.0.0.1:15149) (DC: dc2)
2016/06/23 07:46:13 [INFO] serf: EventMemberJoin: Node 15148.dc2 127.0.0.1
2016/06/23 07:46:13 [INFO] consul: adding WAN server Node 15148.dc2 (Addr: 127.0.0.1:15149) (DC: dc2)
2016/06/23 07:46:13 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15147
2016/06/23 07:46:13 [DEBUG] memberlist: TCP connection from=127.0.0.1:46376
2016/06/23 07:46:13 [INFO] serf: EventMemberJoin: Node 15148.dc2 127.0.0.1
2016/06/23 07:46:13 [INFO] consul: adding WAN server Node 15148.dc2 (Addr: 127.0.0.1:15149) (DC: dc2)
2016/06/23 07:46:13 [INFO] serf: EventMemberJoin: Node 15144.dc1 127.0.0.1
2016/06/23 07:46:13 [INFO] consul: adding WAN server Node 15144.dc1 (Addr: 127.0.0.1:15145) (DC: dc1)
2016/06/23 07:46:13 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:13 [INFO] raft: Node at 127.0.0.1:15149 [Candidate] entering Candidate state
2016/06/23 07:46:13 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/06/23 07:46:13 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/06/23 07:46:13 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/06/23 07:46:13 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/06/23 07:46:13 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/06/23 07:46:13 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/06/23 07:46:13 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/06/23 07:46:13 [DEBUG] serf: messageJoinType: Node 15148.dc2
2016/06/23 07:46:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:13 [DEBUG] raft: Node 127.0.0.1:15145 updated peer set (2): [127.0.0.1:15145]
2016/06/23 07:46:13 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:13 [INFO] consul: member 'Node 15144' joined, marking health alive
2016/06/23 07:46:13 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:13 [DEBUG] raft: Vote granted from 127.0.0.1:15149. Tally: 1
2016/06/23 07:46:13 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:13 [INFO] raft: Node at 127.0.0.1:15149 [Leader] entering Leader state
2016/06/23 07:46:13 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:13 [INFO] consul: New leader elected: Node 15148
2016/06/23 07:46:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:14 [DEBUG] raft: Node 127.0.0.1:15149 updated peer set (2): [127.0.0.1:15149]
2016/06/23 07:46:14 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:14 [INFO] consul: member 'Node 15148' joined, marking health alive
2016/06/23 07:46:14 [INFO] consul: shutting down server
2016/06/23 07:46:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:14 [DEBUG] memberlist: Potential blocking operation. Last command took 19.394594ms
2016/06/23 07:46:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:15 [INFO] consul: shutting down server
2016/06/23 07:46:15 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:15 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:15 [DEBUG] memberlist: Failed UDP ping: Node 15148.dc2 (timeout reached)
2016/06/23 07:46:15 [INFO] memberlist: Suspect Node 15148.dc2 has failed, no acks received
2016/06/23 07:46:15 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/06/23 07:46:15 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogRegister_ForwardDC (3.50s)
=== RUN   TestCatalogDeregister
2016/06/23 07:46:15 [INFO] memberlist: Marking Node 15148.dc2 as failed, suspect timeout reached
2016/06/23 07:46:15 [INFO] serf: EventMemberFailed: Node 15148.dc2 127.0.0.1
2016/06/23 07:46:15 [INFO] raft: Node at 127.0.0.1:15153 [Follower] entering Follower state
2016/06/23 07:46:15 [INFO] serf: EventMemberJoin: Node 15152 127.0.0.1
2016/06/23 07:46:15 [INFO] consul: adding LAN server Node 15152 (Addr: 127.0.0.1:15153) (DC: dc1)
2016/06/23 07:46:15 [INFO] serf: EventMemberJoin: Node 15152.dc1 127.0.0.1
2016/06/23 07:46:15 [INFO] consul: adding WAN server Node 15152.dc1 (Addr: 127.0.0.1:15153) (DC: dc1)
2016/06/23 07:46:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:15 [INFO] raft: Node at 127.0.0.1:15153 [Candidate] entering Candidate state
2016/06/23 07:46:16 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:16 [DEBUG] raft: Vote granted from 127.0.0.1:15153. Tally: 1
2016/06/23 07:46:16 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:16 [INFO] raft: Node at 127.0.0.1:15153 [Leader] entering Leader state
2016/06/23 07:46:16 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:16 [INFO] consul: New leader elected: Node 15152
2016/06/23 07:46:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:17 [DEBUG] raft: Node 127.0.0.1:15153 updated peer set (2): [127.0.0.1:15153]
2016/06/23 07:46:17 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:17 [INFO] consul: member 'Node 15152' joined, marking health alive
2016/06/23 07:46:17 [INFO] consul: shutting down server
2016/06/23 07:46:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:17 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogDeregister (2.82s)
=== RUN   TestCatalogListDatacenters
2016/06/23 07:46:18 [INFO] raft: Node at 127.0.0.1:15157 [Follower] entering Follower state
2016/06/23 07:46:18 [INFO] serf: EventMemberJoin: Node 15156 127.0.0.1
2016/06/23 07:46:18 [INFO] consul: adding LAN server Node 15156 (Addr: 127.0.0.1:15157) (DC: dc1)
2016/06/23 07:46:18 [INFO] serf: EventMemberJoin: Node 15156.dc1 127.0.0.1
2016/06/23 07:46:18 [INFO] consul: adding WAN server Node 15156.dc1 (Addr: 127.0.0.1:15157) (DC: dc1)
2016/06/23 07:46:18 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:18 [INFO] raft: Node at 127.0.0.1:15157 [Candidate] entering Candidate state
2016/06/23 07:46:19 [INFO] raft: Node at 127.0.0.1:15161 [Follower] entering Follower state
2016/06/23 07:46:19 [INFO] serf: EventMemberJoin: Node 15160 127.0.0.1
2016/06/23 07:46:19 [INFO] consul: adding LAN server Node 15160 (Addr: 127.0.0.1:15161) (DC: dc2)
2016/06/23 07:46:19 [INFO] serf: EventMemberJoin: Node 15160.dc2 127.0.0.1
2016/06/23 07:46:19 [DEBUG] memberlist: TCP connection from=127.0.0.1:37220
2016/06/23 07:46:19 [INFO] consul: adding WAN server Node 15160.dc2 (Addr: 127.0.0.1:15161) (DC: dc2)
2016/06/23 07:46:19 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15159
2016/06/23 07:46:19 [INFO] serf: EventMemberJoin: Node 15160.dc2 127.0.0.1
2016/06/23 07:46:19 [INFO] consul: adding WAN server Node 15160.dc2 (Addr: 127.0.0.1:15161) (DC: dc2)
2016/06/23 07:46:19 [INFO] serf: EventMemberJoin: Node 15156.dc1 127.0.0.1
2016/06/23 07:46:19 [INFO] consul: adding WAN server Node 15156.dc1 (Addr: 127.0.0.1:15157) (DC: dc1)
2016/06/23 07:46:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:19 [INFO] raft: Node at 127.0.0.1:15161 [Candidate] entering Candidate state
2016/06/23 07:46:19 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:19 [DEBUG] raft: Vote granted from 127.0.0.1:15157. Tally: 1
2016/06/23 07:46:19 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:19 [INFO] raft: Node at 127.0.0.1:15157 [Leader] entering Leader state
2016/06/23 07:46:19 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:19 [INFO] consul: New leader elected: Node 15156
2016/06/23 07:46:19 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/06/23 07:46:19 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/06/23 07:46:19 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/06/23 07:46:19 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/06/23 07:46:19 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/06/23 07:46:19 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/06/23 07:46:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:19 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/06/23 07:46:19 [DEBUG] serf: messageJoinType: Node 15160.dc2
2016/06/23 07:46:19 [DEBUG] raft: Node 127.0.0.1:15157 updated peer set (2): [127.0.0.1:15157]
2016/06/23 07:46:19 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:19 [INFO] consul: member 'Node 15156' joined, marking health alive
2016/06/23 07:46:19 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:19 [DEBUG] raft: Vote granted from 127.0.0.1:15161. Tally: 1
2016/06/23 07:46:19 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:19 [INFO] raft: Node at 127.0.0.1:15161 [Leader] entering Leader state
2016/06/23 07:46:19 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:19 [INFO] consul: New leader elected: Node 15160
2016/06/23 07:46:20 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:20 [INFO] consul: shutting down server
2016/06/23 07:46:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:20 [DEBUG] raft: Node 127.0.0.1:15161 updated peer set (2): [127.0.0.1:15161]
2016/06/23 07:46:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:20 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:20 [INFO] consul: member 'Node 15160' joined, marking health alive
2016/06/23 07:46:20 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:46:20 [ERR] consul: failed to reconcile member: {Node 15160 127.0.0.1 15162 map[build: port:15161 bootstrap:1 role:consul dc:dc2 vsn:2 vsn_min:1 vsn_max:3] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:46:20 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:46:20 [INFO] consul: shutting down server
2016/06/23 07:46:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:20 [DEBUG] memberlist: Failed UDP ping: Node 15160.dc2 (timeout reached)
2016/06/23 07:46:20 [INFO] memberlist: Suspect Node 15160.dc2 has failed, no acks received
2016/06/23 07:46:20 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogListDatacenters (2.48s)
=== RUN   TestCatalogListDatacenters_DistanceSort
2016/06/23 07:46:20 [INFO] memberlist: Marking Node 15160.dc2 as failed, suspect timeout reached
2016/06/23 07:46:20 [INFO] serf: EventMemberFailed: Node 15160.dc2 127.0.0.1
2016/06/23 07:46:21 [INFO] raft: Node at 127.0.0.1:15165 [Follower] entering Follower state
2016/06/23 07:46:21 [INFO] serf: EventMemberJoin: Node 15164 127.0.0.1
2016/06/23 07:46:21 [INFO] consul: adding LAN server Node 15164 (Addr: 127.0.0.1:15165) (DC: dc1)
2016/06/23 07:46:21 [INFO] serf: EventMemberJoin: Node 15164.dc1 127.0.0.1
2016/06/23 07:46:21 [INFO] consul: adding WAN server Node 15164.dc1 (Addr: 127.0.0.1:15165) (DC: dc1)
2016/06/23 07:46:21 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:21 [INFO] raft: Node at 127.0.0.1:15165 [Candidate] entering Candidate state
2016/06/23 07:46:22 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:22 [DEBUG] raft: Vote granted from 127.0.0.1:15165. Tally: 1
2016/06/23 07:46:22 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:22 [INFO] raft: Node at 127.0.0.1:15165 [Leader] entering Leader state
2016/06/23 07:46:22 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:22 [INFO] consul: New leader elected: Node 15164
2016/06/23 07:46:22 [INFO] raft: Node at 127.0.0.1:15169 [Follower] entering Follower state
2016/06/23 07:46:22 [INFO] serf: EventMemberJoin: Node 15168 127.0.0.1
2016/06/23 07:46:22 [INFO] consul: adding LAN server Node 15168 (Addr: 127.0.0.1:15169) (DC: dc2)
2016/06/23 07:46:22 [INFO] serf: EventMemberJoin: Node 15168.dc2 127.0.0.1
2016/06/23 07:46:22 [INFO] consul: adding WAN server Node 15168.dc2 (Addr: 127.0.0.1:15169) (DC: dc2)
2016/06/23 07:46:22 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:22 [INFO] raft: Node at 127.0.0.1:15169 [Candidate] entering Candidate state
2016/06/23 07:46:22 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:22 [DEBUG] raft: Node 127.0.0.1:15165 updated peer set (2): [127.0.0.1:15165]
2016/06/23 07:46:22 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:22 [INFO] consul: member 'Node 15164' joined, marking health alive
2016/06/23 07:46:22 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:22 [DEBUG] raft: Vote granted from 127.0.0.1:15169. Tally: 1
2016/06/23 07:46:22 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:22 [INFO] raft: Node at 127.0.0.1:15169 [Leader] entering Leader state
2016/06/23 07:46:22 [INFO] raft: Node at 127.0.0.1:15173 [Follower] entering Follower state
2016/06/23 07:46:22 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:22 [INFO] consul: New leader elected: Node 15168
2016/06/23 07:46:22 [INFO] serf: EventMemberJoin: Node 15172 127.0.0.1
2016/06/23 07:46:22 [INFO] consul: adding LAN server Node 15172 (Addr: 127.0.0.1:15173) (DC: acdc)
2016/06/23 07:46:22 [INFO] serf: EventMemberJoin: Node 15172.acdc 127.0.0.1
2016/06/23 07:46:22 [INFO] consul: adding WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/06/23 07:46:22 [DEBUG] memberlist: TCP connection from=127.0.0.1:32888
2016/06/23 07:46:22 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15167
2016/06/23 07:46:22 [INFO] serf: EventMemberJoin: Node 15168.dc2 127.0.0.1
2016/06/23 07:46:22 [INFO] serf: EventMemberJoin: Node 15164.dc1 127.0.0.1
2016/06/23 07:46:22 [INFO] consul: adding WAN server Node 15168.dc2 (Addr: 127.0.0.1:15169) (DC: dc2)
2016/06/23 07:46:22 [INFO] consul: adding WAN server Node 15164.dc1 (Addr: 127.0.0.1:15165) (DC: dc1)
2016/06/23 07:46:22 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15167
2016/06/23 07:46:22 [DEBUG] memberlist: TCP connection from=127.0.0.1:32890
2016/06/23 07:46:22 [INFO] serf: EventMemberJoin: Node 15172.acdc 127.0.0.1
2016/06/23 07:46:22 [INFO] consul: adding WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/06/23 07:46:22 [INFO] serf: EventMemberJoin: Node 15168.dc2 127.0.0.1
2016/06/23 07:46:22 [INFO] consul: adding WAN server Node 15168.dc2 (Addr: 127.0.0.1:15169) (DC: dc2)
2016/06/23 07:46:22 [INFO] serf: EventMemberJoin: Node 15164.dc1 127.0.0.1
2016/06/23 07:46:22 [INFO] consul: adding WAN server Node 15164.dc1 (Addr: 127.0.0.1:15165) (DC: dc1)
2016/06/23 07:46:22 [INFO] consul: shutting down server
2016/06/23 07:46:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:23 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:23 [INFO] raft: Node at 127.0.0.1:15173 [Candidate] entering Candidate state
2016/06/23 07:46:23 [INFO] serf: EventMemberJoin: Node 15172.acdc 127.0.0.1
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/06/23 07:46:23 [INFO] consul: adding WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/06/23 07:46:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15168.dc2
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/06/23 07:46:23 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/06/23 07:46:23 [DEBUG] serf: messageJoinType: Node 15172.acdc
2016/06/23 07:46:23 [DEBUG] memberlist: Failed UDP ping: Node 15172.acdc (timeout reached)
2016/06/23 07:46:23 [DEBUG] raft: Node 127.0.0.1:15169 updated peer set (2): [127.0.0.1:15169]
2016/06/23 07:46:23 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:23 [INFO] consul: member 'Node 15168' joined, marking health alive
2016/06/23 07:46:23 [DEBUG] memberlist: Failed UDP ping: Node 15172.acdc (timeout reached)
2016/06/23 07:46:23 [INFO] memberlist: Suspect Node 15172.acdc has failed, no acks received
2016/06/23 07:46:23 [INFO] memberlist: Suspect Node 15172.acdc has failed, no acks received
2016/06/23 07:46:23 [DEBUG] memberlist: Failed UDP ping: Node 15172.acdc (timeout reached)
2016/06/23 07:46:23 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:23 [INFO] consul: shutting down server
2016/06/23 07:46:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:23 [INFO] memberlist: Suspect Node 15172.acdc has failed, no acks received
2016/06/23 07:46:23 [DEBUG] memberlist: Failed UDP ping: Node 15172.acdc (timeout reached)
2016/06/23 07:46:23 [INFO] memberlist: Suspect Node 15172.acdc has failed, no acks received
2016/06/23 07:46:23 [INFO] memberlist: Marking Node 15172.acdc as failed, suspect timeout reached
2016/06/23 07:46:23 [INFO] serf: EventMemberFailed: Node 15172.acdc 127.0.0.1
2016/06/23 07:46:23 [INFO] consul: removing WAN server Node 15172.acdc (Addr: 127.0.0.1:15173) (DC: acdc)
2016/06/23 07:46:23 [INFO] memberlist: Marking Node 15172.acdc as failed, suspect timeout reached
2016/06/23 07:46:23 [INFO] serf: EventMemberFailed: Node 15172.acdc 127.0.0.1
2016/06/23 07:46:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:23 [DEBUG] memberlist: Failed UDP ping: Node 15168.dc2 (timeout reached)
2016/06/23 07:46:23 [INFO] consul: shutting down server
2016/06/23 07:46:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:23 [INFO] memberlist: Suspect Node 15168.dc2 has failed, no acks received
2016/06/23 07:46:23 [DEBUG] memberlist: Failed UDP ping: Node 15168.dc2 (timeout reached)
2016/06/23 07:46:23 [INFO] memberlist: Suspect Node 15168.dc2 has failed, no acks received
2016/06/23 07:46:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:23 [INFO] memberlist: Marking Node 15168.dc2 as failed, suspect timeout reached
2016/06/23 07:46:23 [INFO] serf: EventMemberFailed: Node 15168.dc2 127.0.0.1
--- PASS: TestCatalogListDatacenters_DistanceSort (3.39s)
=== RUN   TestCatalogListNodes
2016/06/23 07:46:24 [INFO] raft: Node at 127.0.0.1:15177 [Follower] entering Follower state
2016/06/23 07:46:24 [INFO] serf: EventMemberJoin: Node 15176 127.0.0.1
2016/06/23 07:46:24 [INFO] consul: adding LAN server Node 15176 (Addr: 127.0.0.1:15177) (DC: dc1)
2016/06/23 07:46:24 [INFO] serf: EventMemberJoin: Node 15176.dc1 127.0.0.1
2016/06/23 07:46:24 [INFO] consul: adding WAN server Node 15176.dc1 (Addr: 127.0.0.1:15177) (DC: dc1)
2016/06/23 07:46:24 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:24 [INFO] raft: Node at 127.0.0.1:15177 [Candidate] entering Candidate state
2016/06/23 07:46:25 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:25 [DEBUG] raft: Vote granted from 127.0.0.1:15177. Tally: 1
2016/06/23 07:46:25 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:25 [INFO] raft: Node at 127.0.0.1:15177 [Leader] entering Leader state
2016/06/23 07:46:25 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:25 [INFO] consul: New leader elected: Node 15176
2016/06/23 07:46:25 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:25 [DEBUG] raft: Node 127.0.0.1:15177 updated peer set (2): [127.0.0.1:15177]
2016/06/23 07:46:25 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:25 [INFO] consul: member 'Node 15176' joined, marking health alive
2016/06/23 07:46:26 [INFO] consul: shutting down server
2016/06/23 07:46:26 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:26 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:26 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:46:26 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogListNodes (2.46s)
=== RUN   TestCatalogListNodes_StaleRead
2016/06/23 07:46:26 [INFO] raft: Node at 127.0.0.1:15181 [Follower] entering Follower state
2016/06/23 07:46:26 [INFO] serf: EventMemberJoin: Node 15180 127.0.0.1
2016/06/23 07:46:26 [INFO] consul: adding LAN server Node 15180 (Addr: 127.0.0.1:15181) (DC: dc1)
2016/06/23 07:46:26 [INFO] serf: EventMemberJoin: Node 15180.dc1 127.0.0.1
2016/06/23 07:46:26 [INFO] consul: adding WAN server Node 15180.dc1 (Addr: 127.0.0.1:15181) (DC: dc1)
2016/06/23 07:46:27 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:27 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:27 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:27 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:27 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:27 [INFO] raft: Node at 127.0.0.1:15181 [Leader] entering Leader state
2016/06/23 07:46:27 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:27 [INFO] consul: New leader elected: Node 15180
2016/06/23 07:46:27 [INFO] raft: Node at 127.0.0.1:15185 [Follower] entering Follower state
2016/06/23 07:46:27 [INFO] serf: EventMemberJoin: Node 15184 127.0.0.1
2016/06/23 07:46:27 [INFO] consul: adding LAN server Node 15184 (Addr: 127.0.0.1:15185) (DC: dc1)
2016/06/23 07:46:27 [INFO] serf: EventMemberJoin: Node 15184.dc1 127.0.0.1
2016/06/23 07:46:27 [INFO] consul: adding WAN server Node 15184.dc1 (Addr: 127.0.0.1:15185) (DC: dc1)
2016/06/23 07:46:27 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15182
2016/06/23 07:46:27 [DEBUG] memberlist: TCP connection from=127.0.0.1:46512
2016/06/23 07:46:27 [INFO] serf: EventMemberJoin: Node 15184 127.0.0.1
2016/06/23 07:46:27 [INFO] serf: EventMemberJoin: Node 15180 127.0.0.1
2016/06/23 07:46:27 [INFO] consul: adding LAN server Node 15184 (Addr: 127.0.0.1:15185) (DC: dc1)
2016/06/23 07:46:27 [INFO] consul: adding LAN server Node 15180 (Addr: 127.0.0.1:15181) (DC: dc1)
2016/06/23 07:46:27 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:27 [DEBUG] serf: messageJoinType: Node 15184
2016/06/23 07:46:27 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:27 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:46:27 [DEBUG] serf: messageJoinType: Node 15184
2016/06/23 07:46:27 [DEBUG] serf: messageJoinType: Node 15184
2016/06/23 07:46:27 [DEBUG] serf: messageJoinType: Node 15184
2016/06/23 07:46:27 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:27 [DEBUG] serf: messageJoinType: Node 15184
2016/06/23 07:46:27 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:27 [DEBUG] serf: messageJoinType: Node 15184
2016/06/23 07:46:27 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:27 [DEBUG] raft: Node 127.0.0.1:15181 updated peer set (2): [127.0.0.1:15181]
2016/06/23 07:46:27 [DEBUG] serf: messageJoinType: Node 15184
2016/06/23 07:46:27 [DEBUG] memberlist: Potential blocking operation. Last command took 11.692358ms
2016/06/23 07:46:27 [DEBUG] serf: messageJoinType: Node 15184
2016/06/23 07:46:28 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:28 [INFO] consul: member 'Node 15180' joined, marking health alive
2016/06/23 07:46:28 [DEBUG] raft: Node 127.0.0.1:15181 updated peer set (2): [127.0.0.1:15185 127.0.0.1:15181]
2016/06/23 07:46:28 [INFO] raft: Added peer 127.0.0.1:15185, starting replication
2016/06/23 07:46:28 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:53392
2016/06/23 07:46:28 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:53394
2016/06/23 07:46:28 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/06/23 07:46:28 [DEBUG] raft: Failed to contact 127.0.0.1:15185 in 256.836194ms
2016/06/23 07:46:28 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:46:28 [INFO] raft: Node at 127.0.0.1:15181 [Follower] entering Follower state
2016/06/23 07:46:28 [WARN] raft: AppendEntries to 127.0.0.1:15185 rejected, sending older logs (next: 1)
2016/06/23 07:46:28 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:46:28 [ERR] consul: failed to reconcile member: {Node 15184 127.0.0.1 15186 map[vsn_min:1 vsn_max:3 build: port:15185 role:consul dc:dc1 vsn:2] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:46:28 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:46:28 [INFO] consul: cluster leadership lost
2016/06/23 07:46:28 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:28 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:28 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:53398
2016/06/23 07:46:29 [DEBUG] raft: Node 127.0.0.1:15185 updated peer set (2): [127.0.0.1:15181]
2016/06/23 07:46:29 [INFO] raft: pipelining replication to peer 127.0.0.1:15185
2016/06/23 07:46:29 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15185
2016/06/23 07:46:29 [DEBUG] raft-net: 127.0.0.1:15185 accepted connection from: 127.0.0.1:53400
2016/06/23 07:46:29 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:29 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:29 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:29 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:29 [ERR] raft: peer 127.0.0.1:15185 has newer term, stopping replication
2016/06/23 07:46:29 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:29 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:29 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:29 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:30 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:30 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:30 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:30 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:30 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:30 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:30 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:30 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:31 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:31 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/06/23 07:46:31 [DEBUG] raft-net: 127.0.0.1:15181 accepted connection from: 127.0.0.1:58724
2016/06/23 07:46:31 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:31 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:31 [INFO] raft: Duplicate RequestVote for same term: 6
2016/06/23 07:46:31 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:31 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:31 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:31 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/06/23 07:46:31 [INFO] raft: Duplicate RequestVote for same term: 6
2016/06/23 07:46:31 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:31 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:31 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:31 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:32 [INFO] raft: Node at 127.0.0.1:15185 [Follower] entering Follower state
2016/06/23 07:46:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:32 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/06/23 07:46:32 [DEBUG] memberlist: Potential blocking operation. Last command took 10.147978ms
2016/06/23 07:46:32 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:32 [INFO] raft: Duplicate RequestVote for same term: 8
2016/06/23 07:46:32 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:32 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:32 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:32 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:32 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/06/23 07:46:32 [INFO] raft: Duplicate RequestVote for same term: 8
2016/06/23 07:46:32 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:32 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/06/23 07:46:33 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:33 [INFO] raft: Duplicate RequestVote for same term: 9
2016/06/23 07:46:33 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:33 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:33 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:33 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:33 [INFO] raft: Duplicate RequestVote for same term: 9
2016/06/23 07:46:33 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/06/23 07:46:33 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:33 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/06/23 07:46:33 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:33 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/06/23 07:46:33 [INFO] raft: Duplicate RequestVote for same term: 10
2016/06/23 07:46:33 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:33 [INFO] raft: Duplicate RequestVote for same term: 10
2016/06/23 07:46:33 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:33 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:33 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/06/23 07:46:33 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:33 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:34 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:34 [INFO] raft: Duplicate RequestVote for same term: 11
2016/06/23 07:46:34 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/06/23 07:46:34 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:34 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:34 [INFO] raft: Duplicate RequestVote for same term: 11
2016/06/23 07:46:34 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:34 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/06/23 07:46:34 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:34 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:34 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:34 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/06/23 07:46:34 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:34 [INFO] raft: Duplicate RequestVote for same term: 12
2016/06/23 07:46:34 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:34 [INFO] raft: Duplicate RequestVote for same term: 12
2016/06/23 07:46:34 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:34 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:34 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:34 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/06/23 07:46:36 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:36 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:36 [INFO] raft: Duplicate RequestVote for same term: 13
2016/06/23 07:46:36 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/06/23 07:46:36 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:36 [INFO] raft: Duplicate RequestVote for same term: 13
2016/06/23 07:46:36 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:36 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:36 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:36 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/06/23 07:46:36 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:36 [INFO] raft: Duplicate RequestVote for same term: 14
2016/06/23 07:46:36 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/06/23 07:46:37 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:37 [INFO] raft: Duplicate RequestVote for same term: 14
2016/06/23 07:46:37 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:37 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:37 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/06/23 07:46:37 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:37 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:37 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:37 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/06/23 07:46:37 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:37 [INFO] raft: Duplicate RequestVote for same term: 15
2016/06/23 07:46:37 [INFO] raft: Duplicate RequestVote for same term: 15
2016/06/23 07:46:37 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:37 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:37 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/06/23 07:46:37 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:37 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:38 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:38 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/06/23 07:46:38 [INFO] raft: Duplicate RequestVote for same term: 16
2016/06/23 07:46:38 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:38 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/06/23 07:46:38 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:38 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:38 [INFO] raft: Duplicate RequestVote for same term: 16
2016/06/23 07:46:38 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:38 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:38 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:38 [DEBUG] raft: Vote granted from 127.0.0.1:15185. Tally: 1
2016/06/23 07:46:38 [INFO] raft: Duplicate RequestVote for same term: 17
2016/06/23 07:46:38 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:38 [INFO] raft: Node at 127.0.0.1:15185 [Candidate] entering Candidate state
2016/06/23 07:46:38 [INFO] consul: shutting down server
2016/06/23 07:46:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:38 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:38 [INFO] raft: Duplicate RequestVote for same term: 17
2016/06/23 07:46:38 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:39 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:39 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:39 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:39 [DEBUG] memberlist: Failed UDP ping: Node 15184 (timeout reached)
2016/06/23 07:46:39 [INFO] memberlist: Suspect Node 15184 has failed, no acks received
2016/06/23 07:46:39 [DEBUG] memberlist: Failed UDP ping: Node 15184 (timeout reached)
2016/06/23 07:46:39 [INFO] memberlist: Suspect Node 15184 has failed, no acks received
2016/06/23 07:46:39 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:46:39 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15185: EOF
2016/06/23 07:46:39 [INFO] memberlist: Marking Node 15184 as failed, suspect timeout reached
2016/06/23 07:46:39 [INFO] serf: EventMemberFailed: Node 15184 127.0.0.1
2016/06/23 07:46:39 [INFO] consul: removing LAN server Node 15184 (Addr: 127.0.0.1:15185) (DC: dc1)
2016/06/23 07:46:39 [DEBUG] memberlist: Failed UDP ping: Node 15184 (timeout reached)
2016/06/23 07:46:39 [INFO] memberlist: Suspect Node 15184 has failed, no acks received
2016/06/23 07:46:39 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:39 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:39 [DEBUG] raft: Vote granted from 127.0.0.1:15181. Tally: 1
2016/06/23 07:46:39 [INFO] raft: Duplicate RequestVote for same term: 18
2016/06/23 07:46:39 [INFO] consul: shutting down server
2016/06/23 07:46:39 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:39 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:39 [INFO] raft: Node at 127.0.0.1:15181 [Candidate] entering Candidate state
2016/06/23 07:46:39 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:39 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:46:39 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15185: EOF
2016/06/23 07:46:40 [DEBUG] raft: Votes needed: 2
--- FAIL: TestCatalogListNodes_StaleRead (13.90s)
	wait.go:41: failed to find leader: No cluster leader
=== RUN   TestCatalogListNodes_ConsistentRead_Fail
2016/06/23 07:46:41 [INFO] raft: Node at 127.0.0.1:15189 [Follower] entering Follower state
2016/06/23 07:46:41 [INFO] serf: EventMemberJoin: Node 15188 127.0.0.1
2016/06/23 07:46:41 [INFO] consul: adding LAN server Node 15188 (Addr: 127.0.0.1:15189) (DC: dc1)
2016/06/23 07:46:41 [INFO] serf: EventMemberJoin: Node 15188.dc1 127.0.0.1
2016/06/23 07:46:41 [INFO] consul: adding WAN server Node 15188.dc1 (Addr: 127.0.0.1:15189) (DC: dc1)
2016/06/23 07:46:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:41 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/06/23 07:46:41 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:41 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/06/23 07:46:41 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:41 [INFO] raft: Node at 127.0.0.1:15189 [Leader] entering Leader state
2016/06/23 07:46:41 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:41 [INFO] consul: New leader elected: Node 15188
2016/06/23 07:46:41 [INFO] raft: Node at 127.0.0.1:15193 [Follower] entering Follower state
2016/06/23 07:46:41 [INFO] serf: EventMemberJoin: Node 15192 127.0.0.1
2016/06/23 07:46:41 [INFO] consul: adding LAN server Node 15192 (Addr: 127.0.0.1:15193) (DC: dc1)
2016/06/23 07:46:41 [INFO] serf: EventMemberJoin: Node 15192.dc1 127.0.0.1
2016/06/23 07:46:41 [INFO] consul: adding WAN server Node 15192.dc1 (Addr: 127.0.0.1:15193) (DC: dc1)
2016/06/23 07:46:41 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15190
2016/06/23 07:46:41 [DEBUG] memberlist: TCP connection from=127.0.0.1:36616
2016/06/23 07:46:41 [INFO] serf: EventMemberJoin: Node 15192 127.0.0.1
2016/06/23 07:46:41 [INFO] consul: adding LAN server Node 15192 (Addr: 127.0.0.1:15193) (DC: dc1)
2016/06/23 07:46:41 [INFO] serf: EventMemberJoin: Node 15188 127.0.0.1
2016/06/23 07:46:41 [INFO] consul: adding LAN server Node 15188 (Addr: 127.0.0.1:15189) (DC: dc1)
2016/06/23 07:46:41 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:46:41 [DEBUG] serf: messageJoinType: Node 15192
2016/06/23 07:46:41 [DEBUG] serf: messageJoinType: Node 15192
2016/06/23 07:46:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:41 [DEBUG] serf: messageJoinType: Node 15192
2016/06/23 07:46:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:41 [DEBUG] serf: messageJoinType: Node 15192
2016/06/23 07:46:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:41 [DEBUG] raft: Node 127.0.0.1:15189 updated peer set (2): [127.0.0.1:15189]
2016/06/23 07:46:41 [DEBUG] serf: messageJoinType: Node 15192
2016/06/23 07:46:41 [DEBUG] serf: messageJoinType: Node 15192
2016/06/23 07:46:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:41 [DEBUG] serf: messageJoinType: Node 15192
2016/06/23 07:46:41 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:42 [DEBUG] serf: messageJoinType: Node 15192
2016/06/23 07:46:42 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:42 [INFO] consul: member 'Node 15188' joined, marking health alive
2016/06/23 07:46:42 [DEBUG] raft: Node 127.0.0.1:15189 updated peer set (2): [127.0.0.1:15193 127.0.0.1:15189]
2016/06/23 07:46:42 [INFO] raft: Added peer 127.0.0.1:15193, starting replication
2016/06/23 07:46:42 [DEBUG] raft-net: 127.0.0.1:15193 accepted connection from: 127.0.0.1:54808
2016/06/23 07:46:42 [DEBUG] raft-net: 127.0.0.1:15193 accepted connection from: 127.0.0.1:54810
2016/06/23 07:46:42 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/06/23 07:46:42 [DEBUG] raft: Failed to contact 127.0.0.1:15193 in 163.702677ms
2016/06/23 07:46:42 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:46:42 [INFO] raft: Node at 127.0.0.1:15189 [Follower] entering Follower state
2016/06/23 07:46:42 [INFO] consul: cluster leadership lost
2016/06/23 07:46:42 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:46:42 [ERR] consul: failed to reconcile member: {Node 15192 127.0.0.1 15194 map[build: port:15193 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:46:42 [WARN] raft: AppendEntries to 127.0.0.1:15193 rejected, sending older logs (next: 1)
2016/06/23 07:46:42 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:46:42 [ERR] consul: failed to wait for barrier: node is not the leader
2016/06/23 07:46:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:42 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/06/23 07:46:42 [DEBUG] raft-net: 127.0.0.1:15193 accepted connection from: 127.0.0.1:54814
2016/06/23 07:46:42 [DEBUG] raft: Node 127.0.0.1:15193 updated peer set (2): [127.0.0.1:15189]
2016/06/23 07:46:42 [INFO] raft: pipelining replication to peer 127.0.0.1:15193
2016/06/23 07:46:42 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15193
2016/06/23 07:46:42 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:42 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/06/23 07:46:42 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:42 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/06/23 07:46:43 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:43 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/06/23 07:46:43 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:43 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/06/23 07:46:43 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:43 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/06/23 07:46:43 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:43 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/06/23 07:46:44 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:44 [DEBUG] raft: Vote granted from 127.0.0.1:15189. Tally: 1
2016/06/23 07:46:44 [DEBUG] raft: Vote granted from 127.0.0.1:15193. Tally: 2
2016/06/23 07:46:44 [INFO] raft: Election won. Tally: 2
2016/06/23 07:46:44 [INFO] raft: Node at 127.0.0.1:15189 [Leader] entering Leader state
2016/06/23 07:46:44 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:44 [WARN] raft: Failed to get previous log: 4 log not found (last: 3)
2016/06/23 07:46:44 [WARN] raft: AppendEntries to 127.0.0.1:15193 rejected, sending older logs (next: 4)
2016/06/23 07:46:44 [INFO] consul: New leader elected: Node 15188
2016/06/23 07:46:44 [INFO] consul: shutting down server
2016/06/23 07:46:44 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:44 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:44 [DEBUG] memberlist: Failed UDP ping: Node 15192 (timeout reached)
2016/06/23 07:46:44 [INFO] raft: pipelining replication to peer 127.0.0.1:15193
2016/06/23 07:46:44 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:46:44 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15193
2016/06/23 07:46:44 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:46:44 [ERR] raft: Failed to heartbeat to 127.0.0.1:15193: EOF
2016/06/23 07:46:44 [ERR] raft: Failed to heartbeat to 127.0.0.1:15193: dial tcp 127.0.0.1:15193: getsockopt: connection refused
2016/06/23 07:46:44 [INFO] memberlist: Suspect Node 15192 has failed, no acks received
2016/06/23 07:46:44 [ERR] raft: Failed to heartbeat to 127.0.0.1:15193: dial tcp 127.0.0.1:15193: getsockopt: connection refused
2016/06/23 07:46:44 [ERR] raft: Failed to heartbeat to 127.0.0.1:15193: dial tcp 127.0.0.1:15193: getsockopt: connection refused
2016/06/23 07:46:44 [DEBUG] memberlist: Failed UDP ping: Node 15192 (timeout reached)
2016/06/23 07:46:44 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15193: dial tcp 127.0.0.1:15193: getsockopt: connection refused
2016/06/23 07:46:44 [ERR] raft: Failed to heartbeat to 127.0.0.1:15193: dial tcp 127.0.0.1:15193: getsockopt: connection refused
2016/06/23 07:46:45 [INFO] memberlist: Suspect Node 15192 has failed, no acks received
2016/06/23 07:46:45 [DEBUG] raft: Failed to contact 127.0.0.1:15193 in 130.764669ms
2016/06/23 07:46:45 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:46:45 [INFO] raft: Node at 127.0.0.1:15189 [Follower] entering Follower state
2016/06/23 07:46:45 [INFO] consul: cluster leadership lost
2016/06/23 07:46:45 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:46:45 [ERR] consul: failed to wait for barrier: node is not the leader
2016/06/23 07:46:45 [INFO] consul: shutting down server
2016/06/23 07:46:45 [INFO] consul: shutting down server
2016/06/23 07:46:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:45 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15193: dial tcp 127.0.0.1:15193: getsockopt: connection refused
2016/06/23 07:46:45 [ERR] raft: Failed to heartbeat to 127.0.0.1:15193: dial tcp 127.0.0.1:15193: getsockopt: connection refused
2016/06/23 07:46:45 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:45 [INFO] raft: Node at 127.0.0.1:15189 [Candidate] entering Candidate state
2016/06/23 07:46:45 [INFO] memberlist: Marking Node 15192 as failed, suspect timeout reached
2016/06/23 07:46:45 [INFO] serf: EventMemberFailed: Node 15192 127.0.0.1
2016/06/23 07:46:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:45 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15193: dial tcp 127.0.0.1:15193: getsockopt: connection refused
2016/06/23 07:46:45 [DEBUG] raft: Votes needed: 2
--- PASS: TestCatalogListNodes_ConsistentRead_Fail (5.35s)
=== RUN   TestCatalogListNodes_ConsistentRead
2016/06/23 07:46:46 [INFO] raft: Node at 127.0.0.1:15197 [Follower] entering Follower state
2016/06/23 07:46:46 [INFO] serf: EventMemberJoin: Node 15196 127.0.0.1
2016/06/23 07:46:46 [INFO] consul: adding LAN server Node 15196 (Addr: 127.0.0.1:15197) (DC: dc1)
2016/06/23 07:46:46 [INFO] serf: EventMemberJoin: Node 15196.dc1 127.0.0.1
2016/06/23 07:46:46 [INFO] consul: adding WAN server Node 15196.dc1 (Addr: 127.0.0.1:15197) (DC: dc1)
2016/06/23 07:46:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:46 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:46 [INFO] raft: Node at 127.0.0.1:15201 [Follower] entering Follower state
2016/06/23 07:46:46 [INFO] serf: EventMemberJoin: Node 15200 127.0.0.1
2016/06/23 07:46:46 [INFO] consul: adding LAN server Node 15200 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/06/23 07:46:46 [INFO] serf: EventMemberJoin: Node 15200.dc1 127.0.0.1
2016/06/23 07:46:46 [INFO] consul: adding WAN server Node 15200.dc1 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/06/23 07:46:46 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15198
2016/06/23 07:46:46 [DEBUG] memberlist: TCP connection from=127.0.0.1:39326
2016/06/23 07:46:46 [INFO] serf: EventMemberJoin: Node 15200 127.0.0.1
2016/06/23 07:46:46 [INFO] serf: EventMemberJoin: Node 15196 127.0.0.1
2016/06/23 07:46:46 [INFO] consul: adding LAN server Node 15200 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/06/23 07:46:46 [INFO] consul: adding LAN server Node 15196 (Addr: 127.0.0.1:15197) (DC: dc1)
2016/06/23 07:46:46 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:46:46 [DEBUG] serf: messageJoinType: Node 15200
2016/06/23 07:46:46 [DEBUG] raft: Votes needed: 1
2016/06/23 07:46:46 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:46 [INFO] raft: Election won. Tally: 1
2016/06/23 07:46:46 [INFO] raft: Node at 127.0.0.1:15197 [Leader] entering Leader state
2016/06/23 07:46:46 [INFO] consul: cluster leadership acquired
2016/06/23 07:46:46 [INFO] consul: New leader elected: Node 15196
2016/06/23 07:46:46 [DEBUG] serf: messageJoinType: Node 15200
2016/06/23 07:46:46 [DEBUG] serf: messageJoinType: Node 15200
2016/06/23 07:46:46 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:46 [DEBUG] serf: messageJoinType: Node 15200
2016/06/23 07:46:46 [INFO] consul: New leader elected: Node 15196
2016/06/23 07:46:47 [DEBUG] serf: messageJoinType: Node 15200
2016/06/23 07:46:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:47 [DEBUG] serf: messageJoinType: Node 15200
2016/06/23 07:46:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:47 [DEBUG] serf: messageJoinType: Node 15200
2016/06/23 07:46:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:47 [DEBUG] serf: messageJoinType: Node 15200
2016/06/23 07:46:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:47 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:46:47 [DEBUG] raft: Node 127.0.0.1:15197 updated peer set (2): [127.0.0.1:15197]
2016/06/23 07:46:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:47 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:46:47 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:46:47 [INFO] consul: member 'Node 15196' joined, marking health alive
2016/06/23 07:46:47 [DEBUG] raft: Node 127.0.0.1:15197 updated peer set (2): [127.0.0.1:15201 127.0.0.1:15197]
2016/06/23 07:46:47 [INFO] raft: Added peer 127.0.0.1:15201, starting replication
2016/06/23 07:46:47 [DEBUG] raft-net: 127.0.0.1:15201 accepted connection from: 127.0.0.1:36822
2016/06/23 07:46:47 [DEBUG] raft-net: 127.0.0.1:15201 accepted connection from: 127.0.0.1:36824
2016/06/23 07:46:47 [DEBUG] raft: Failed to contact 127.0.0.1:15201 in 196.80469ms
2016/06/23 07:46:47 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:46:47 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/06/23 07:46:47 [INFO] raft: Node at 127.0.0.1:15197 [Follower] entering Follower state
2016/06/23 07:46:47 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:46:47 [INFO] consul: cluster leadership lost
2016/06/23 07:46:47 [ERR] consul: failed to reconcile member: {Node 15200 127.0.0.1 15202 map[vsn_max:3 build: port:15201 role:consul dc:dc1 vsn:2 vsn_min:1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:46:47 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:46:47 [ERR] consul: failed to wait for barrier: node is not the leader
2016/06/23 07:46:47 [WARN] raft: AppendEntries to 127.0.0.1:15201 rejected, sending older logs (next: 1)
2016/06/23 07:46:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:47 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:48 [DEBUG] raft-net: 127.0.0.1:15201 accepted connection from: 127.0.0.1:36828
2016/06/23 07:46:48 [DEBUG] raft: Node 127.0.0.1:15201 updated peer set (2): [127.0.0.1:15197]
2016/06/23 07:46:48 [INFO] raft: pipelining replication to peer 127.0.0.1:15201
2016/06/23 07:46:48 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15201
2016/06/23 07:46:48 [DEBUG] raft-net: 127.0.0.1:15201 accepted connection from: 127.0.0.1:36830
2016/06/23 07:46:48 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:48 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:48 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:48 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:48 [DEBUG] raft-net: 127.0.0.1:15201 accepted connection from: 127.0.0.1:36832
2016/06/23 07:46:48 [ERR] raft: peer 127.0.0.1:15201 has newer term, stopping replication
2016/06/23 07:46:49 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:49 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:49 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:49 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:49 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:49 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:50 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:50 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:50 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:50 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:50 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:50 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:51 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:51 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:51 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:51 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:51 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/06/23 07:46:51 [DEBUG] raft-net: 127.0.0.1:15197 accepted connection from: 127.0.0.1:48326
2016/06/23 07:46:51 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:51 [INFO] raft: Duplicate RequestVote for same term: 7
2016/06/23 07:46:51 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:51 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:51 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:51 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:51 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/06/23 07:46:51 [INFO] raft: Duplicate RequestVote for same term: 7
2016/06/23 07:46:51 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:51 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/06/23 07:46:52 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:52 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:52 [INFO] raft: Duplicate RequestVote for same term: 8
2016/06/23 07:46:52 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:52 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:52 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:52 [INFO] raft: Duplicate RequestVote for same term: 8
2016/06/23 07:46:52 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/06/23 07:46:52 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:52 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/06/23 07:46:52 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:52 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:52 [INFO] raft: Duplicate RequestVote for same term: 9
2016/06/23 07:46:53 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:53 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:53 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:53 [INFO] raft: Duplicate RequestVote for same term: 9
2016/06/23 07:46:53 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/06/23 07:46:53 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:53 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/06/23 07:46:53 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:53 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:53 [INFO] raft: Duplicate RequestVote for same term: 10
2016/06/23 07:46:53 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:53 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:53 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:53 [INFO] raft: Duplicate RequestVote for same term: 10
2016/06/23 07:46:53 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/06/23 07:46:53 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:53 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/06/23 07:46:54 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:54 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:54 [INFO] raft: Duplicate RequestVote for same term: 11
2016/06/23 07:46:54 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:54 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:54 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:54 [INFO] raft: Duplicate RequestVote for same term: 11
2016/06/23 07:46:54 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/06/23 07:46:54 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:54 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:54 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:54 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:54 [INFO] raft: Node at 127.0.0.1:15201 [Follower] entering Follower state
2016/06/23 07:46:54 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:46:54 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/06/23 07:46:55 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:55 [INFO] raft: Duplicate RequestVote for same term: 13
2016/06/23 07:46:55 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:55 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:55 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:55 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:55 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/06/23 07:46:55 [INFO] raft: Duplicate RequestVote for same term: 13
2016/06/23 07:46:55 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:55 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/06/23 07:46:55 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:55 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:55 [INFO] raft: Duplicate RequestVote for same term: 14
2016/06/23 07:46:55 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:55 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:55 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:55 [INFO] raft: Duplicate RequestVote for same term: 14
2016/06/23 07:46:55 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/06/23 07:46:55 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:55 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/06/23 07:46:56 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:56 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:56 [INFO] raft: Duplicate RequestVote for same term: 15
2016/06/23 07:46:56 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:56 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:56 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:56 [INFO] raft: Duplicate RequestVote for same term: 15
2016/06/23 07:46:56 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/06/23 07:46:56 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:56 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/06/23 07:46:56 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:56 [INFO] raft: Duplicate RequestVote for same term: 16
2016/06/23 07:46:56 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:56 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:56 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:56 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:56 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/06/23 07:46:56 [INFO] raft: Duplicate RequestVote for same term: 16
2016/06/23 07:46:56 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:56 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/06/23 07:46:57 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:57 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:57 [INFO] raft: Duplicate RequestVote for same term: 17
2016/06/23 07:46:57 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:57 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:57 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:57 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/06/23 07:46:57 [INFO] raft: Duplicate RequestVote for same term: 17
2016/06/23 07:46:57 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:57 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/06/23 07:46:58 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:58 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:58 [INFO] raft: Duplicate RequestVote for same term: 18
2016/06/23 07:46:58 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:58 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:58 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:58 [DEBUG] raft: Vote granted from 127.0.0.1:15201. Tally: 1
2016/06/23 07:46:58 [INFO] raft: Duplicate RequestVote for same term: 18
2016/06/23 07:46:58 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:58 [INFO] raft: Node at 127.0.0.1:15201 [Candidate] entering Candidate state
2016/06/23 07:46:58 [INFO] consul: shutting down server
2016/06/23 07:46:58 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:58 [DEBUG] memberlist: Failed UDP ping: Node 15200 (timeout reached)
2016/06/23 07:46:58 [INFO] memberlist: Suspect Node 15200 has failed, no acks received
2016/06/23 07:46:58 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:58 [DEBUG] memberlist: Failed UDP ping: Node 15200 (timeout reached)
2016/06/23 07:46:58 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:46:58 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15201: EOF
2016/06/23 07:46:58 [INFO] memberlist: Suspect Node 15200 has failed, no acks received
2016/06/23 07:46:58 [INFO] memberlist: Marking Node 15200 as failed, suspect timeout reached
2016/06/23 07:46:58 [INFO] serf: EventMemberFailed: Node 15200 127.0.0.1
2016/06/23 07:46:58 [INFO] consul: removing LAN server Node 15200 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/06/23 07:46:58 [DEBUG] memberlist: Failed UDP ping: Node 15200 (timeout reached)
2016/06/23 07:46:58 [INFO] memberlist: Suspect Node 15200 has failed, no acks received
2016/06/23 07:46:58 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:58 [DEBUG] raft: Vote granted from 127.0.0.1:15197. Tally: 1
2016/06/23 07:46:58 [INFO] raft: Duplicate RequestVote for same term: 19
2016/06/23 07:46:58 [DEBUG] raft: Votes needed: 2
2016/06/23 07:46:58 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:46:58 [INFO] raft: Node at 127.0.0.1:15197 [Candidate] entering Candidate state
2016/06/23 07:46:58 [INFO] consul: shutting down server
2016/06/23 07:46:58 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:58 [WARN] serf: Shutdown without a Leave
2016/06/23 07:46:59 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:46:59 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15201: EOF
2016/06/23 07:46:59 [DEBUG] raft: Votes needed: 2
--- FAIL: TestCatalogListNodes_ConsistentRead (13.78s)
	wait.go:41: failed to find leader: No cluster leader
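The failure above is a repeated split vote: the log shows a three-peer test cluster ("Votes needed: 2") in which the two surviving candidates at 127.0.0.1:15197 and 127.0.0.1:15201 each tally only their own vote ("Tally: 1") and reject the other's request ("Duplicate RequestVote for same term"), so elections restart until the test's wait loop gives up. A minimal illustrative sketch of the quorum arithmetic the log is reporting (not Consul's actual code):

```python
def quorum(peers: int) -> int:
    # Raft requires a strict majority of the configured peer set
    # to elect a leader.
    return peers // 2 + 1

# Three configured peers, hence "Votes needed: 2" in the log above.
print(quorum(3))  # -> 2

# Each colliding candidate only ever tallies its own vote ("Tally: 1"),
# which never reaches quorum, so the timeout/restart cycle repeats.
print(1 >= quorum(3))  # -> False
```

With randomized election timeouts this livelock normally resolves quickly; on a slow armhf builder the collisions can persist past the test's 13-second leader-wait, producing the "No cluster leader" failure seen here.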
=== RUN   TestCatalogListNodes_DistanceSort
2016/06/23 07:47:00 [INFO] raft: Node at 127.0.0.1:15205 [Follower] entering Follower state
2016/06/23 07:47:00 [INFO] serf: EventMemberJoin: Node 15204 127.0.0.1
2016/06/23 07:47:00 [INFO] consul: adding LAN server Node 15204 (Addr: 127.0.0.1:15205) (DC: dc1)
2016/06/23 07:47:00 [INFO] serf: EventMemberJoin: Node 15204.dc1 127.0.0.1
2016/06/23 07:47:00 [INFO] consul: adding WAN server Node 15204.dc1 (Addr: 127.0.0.1:15205) (DC: dc1)
2016/06/23 07:47:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:00 [INFO] raft: Node at 127.0.0.1:15205 [Candidate] entering Candidate state
2016/06/23 07:47:00 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:00 [DEBUG] raft: Vote granted from 127.0.0.1:15205. Tally: 1
2016/06/23 07:47:00 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:00 [INFO] raft: Node at 127.0.0.1:15205 [Leader] entering Leader state
2016/06/23 07:47:00 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:00 [INFO] consul: New leader elected: Node 15204
2016/06/23 07:47:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:01 [DEBUG] raft: Node 127.0.0.1:15205 updated peer set (2): [127.0.0.1:15205]
2016/06/23 07:47:01 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:01 [INFO] consul: member 'Node 15204' joined, marking health alive
2016/06/23 07:47:01 [INFO] consul: shutting down server
2016/06/23 07:47:01 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:01 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:01 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalogListNodes_DistanceSort (2.23s)
=== RUN   TestCatalogListServices
2016/06/23 07:47:02 [INFO] raft: Node at 127.0.0.1:15209 [Follower] entering Follower state
2016/06/23 07:47:02 [INFO] serf: EventMemberJoin: Node 15208 127.0.0.1
2016/06/23 07:47:02 [INFO] consul: adding LAN server Node 15208 (Addr: 127.0.0.1:15209) (DC: dc1)
2016/06/23 07:47:02 [INFO] serf: EventMemberJoin: Node 15208.dc1 127.0.0.1
2016/06/23 07:47:02 [INFO] consul: adding WAN server Node 15208.dc1 (Addr: 127.0.0.1:15209) (DC: dc1)
2016/06/23 07:47:02 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:02 [INFO] raft: Node at 127.0.0.1:15209 [Candidate] entering Candidate state
2016/06/23 07:47:03 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:03 [DEBUG] raft: Vote granted from 127.0.0.1:15209. Tally: 1
2016/06/23 07:47:03 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:03 [INFO] raft: Node at 127.0.0.1:15209 [Leader] entering Leader state
2016/06/23 07:47:03 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:03 [INFO] consul: New leader elected: Node 15208
2016/06/23 07:47:03 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:03 [DEBUG] raft: Node 127.0.0.1:15209 updated peer set (2): [127.0.0.1:15209]
2016/06/23 07:47:03 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:03 [INFO] consul: member 'Node 15208' joined, marking health alive
2016/06/23 07:47:03 [INFO] consul: shutting down server
2016/06/23 07:47:03 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:03 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:04 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogListServices (2.39s)
=== RUN   TestCatalogListServices_Blocking
2016/06/23 07:47:05 [INFO] raft: Node at 127.0.0.1:15213 [Follower] entering Follower state
2016/06/23 07:47:05 [INFO] serf: EventMemberJoin: Node 15212 127.0.0.1
2016/06/23 07:47:05 [INFO] consul: adding LAN server Node 15212 (Addr: 127.0.0.1:15213) (DC: dc1)
2016/06/23 07:47:05 [INFO] serf: EventMemberJoin: Node 15212.dc1 127.0.0.1
2016/06/23 07:47:05 [INFO] consul: adding WAN server Node 15212.dc1 (Addr: 127.0.0.1:15213) (DC: dc1)
2016/06/23 07:47:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:05 [INFO] raft: Node at 127.0.0.1:15213 [Candidate] entering Candidate state
2016/06/23 07:47:05 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:05 [DEBUG] raft: Vote granted from 127.0.0.1:15213. Tally: 1
2016/06/23 07:47:05 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:05 [INFO] raft: Node at 127.0.0.1:15213 [Leader] entering Leader state
2016/06/23 07:47:05 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:05 [INFO] consul: New leader elected: Node 15212
2016/06/23 07:47:06 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:06 [DEBUG] raft: Node 127.0.0.1:15213 updated peer set (2): [127.0.0.1:15213]
2016/06/23 07:47:06 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:06 [INFO] consul: member 'Node 15212' joined, marking health alive
2016/06/23 07:47:06 [INFO] consul: shutting down server
2016/06/23 07:47:06 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:07 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogListServices_Blocking (3.03s)
=== RUN   TestCatalogListServices_Timeout
2016/06/23 07:47:07 [INFO] raft: Node at 127.0.0.1:15217 [Follower] entering Follower state
2016/06/23 07:47:07 [INFO] serf: EventMemberJoin: Node 15216 127.0.0.1
2016/06/23 07:47:07 [INFO] consul: adding LAN server Node 15216 (Addr: 127.0.0.1:15217) (DC: dc1)
2016/06/23 07:47:07 [INFO] serf: EventMemberJoin: Node 15216.dc1 127.0.0.1
2016/06/23 07:47:07 [INFO] consul: adding WAN server Node 15216.dc1 (Addr: 127.0.0.1:15217) (DC: dc1)
2016/06/23 07:47:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:07 [INFO] raft: Node at 127.0.0.1:15217 [Candidate] entering Candidate state
2016/06/23 07:47:08 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:08 [DEBUG] raft: Vote granted from 127.0.0.1:15217. Tally: 1
2016/06/23 07:47:08 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:08 [INFO] raft: Node at 127.0.0.1:15217 [Leader] entering Leader state
2016/06/23 07:47:08 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:08 [INFO] consul: New leader elected: Node 15216
2016/06/23 07:47:08 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:09 [DEBUG] raft: Node 127.0.0.1:15217 updated peer set (2): [127.0.0.1:15217]
2016/06/23 07:47:09 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:09 [INFO] consul: member 'Node 15216' joined, marking health alive
2016/06/23 07:47:09 [INFO] consul: shutting down server
2016/06/23 07:47:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:09 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogListServices_Timeout (2.53s)
=== RUN   TestCatalogListServices_Stale
2016/06/23 07:47:10 [INFO] raft: Node at 127.0.0.1:15221 [Follower] entering Follower state
2016/06/23 07:47:10 [INFO] serf: EventMemberJoin: Node 15220 127.0.0.1
2016/06/23 07:47:10 [INFO] consul: adding LAN server Node 15220 (Addr: 127.0.0.1:15221) (DC: dc1)
2016/06/23 07:47:10 [INFO] serf: EventMemberJoin: Node 15220.dc1 127.0.0.1
2016/06/23 07:47:10 [INFO] consul: adding WAN server Node 15220.dc1 (Addr: 127.0.0.1:15221) (DC: dc1)
2016/06/23 07:47:10 [INFO] consul: shutting down server
2016/06/23 07:47:10 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:10 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:10 [INFO] raft: Node at 127.0.0.1:15221 [Candidate] entering Candidate state
2016/06/23 07:47:10 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:10 [DEBUG] raft: Votes needed: 1
--- PASS: TestCatalogListServices_Stale (1.31s)
=== RUN   TestCatalogListServiceNodes
2016/06/23 07:47:11 [INFO] raft: Node at 127.0.0.1:15225 [Follower] entering Follower state
2016/06/23 07:47:11 [INFO] serf: EventMemberJoin: Node 15224 127.0.0.1
2016/06/23 07:47:11 [INFO] consul: adding LAN server Node 15224 (Addr: 127.0.0.1:15225) (DC: dc1)
2016/06/23 07:47:11 [INFO] serf: EventMemberJoin: Node 15224.dc1 127.0.0.1
2016/06/23 07:47:11 [INFO] consul: adding WAN server Node 15224.dc1 (Addr: 127.0.0.1:15225) (DC: dc1)
2016/06/23 07:47:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:11 [INFO] raft: Node at 127.0.0.1:15225 [Candidate] entering Candidate state
2016/06/23 07:47:12 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:12 [DEBUG] raft: Vote granted from 127.0.0.1:15225. Tally: 1
2016/06/23 07:47:12 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:12 [INFO] raft: Node at 127.0.0.1:15225 [Leader] entering Leader state
2016/06/23 07:47:12 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:12 [INFO] consul: New leader elected: Node 15224
2016/06/23 07:47:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:12 [DEBUG] raft: Node 127.0.0.1:15225 updated peer set (2): [127.0.0.1:15225]
2016/06/23 07:47:12 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:12 [INFO] consul: member 'Node 15224' joined, marking health alive
2016/06/23 07:47:12 [INFO] consul: shutting down server
2016/06/23 07:47:12 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:12 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogListServiceNodes (1.93s)
=== RUN   TestCatalogListServiceNodes_DistanceSort
2016/06/23 07:47:13 [INFO] raft: Node at 127.0.0.1:15229 [Follower] entering Follower state
2016/06/23 07:47:13 [INFO] serf: EventMemberJoin: Node 15228 127.0.0.1
2016/06/23 07:47:13 [INFO] consul: adding LAN server Node 15228 (Addr: 127.0.0.1:15229) (DC: dc1)
2016/06/23 07:47:13 [INFO] serf: EventMemberJoin: Node 15228.dc1 127.0.0.1
2016/06/23 07:47:13 [INFO] consul: adding WAN server Node 15228.dc1 (Addr: 127.0.0.1:15229) (DC: dc1)
2016/06/23 07:47:13 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:13 [INFO] raft: Node at 127.0.0.1:15229 [Candidate] entering Candidate state
2016/06/23 07:47:14 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:14 [DEBUG] raft: Vote granted from 127.0.0.1:15229. Tally: 1
2016/06/23 07:47:14 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:14 [INFO] raft: Node at 127.0.0.1:15229 [Leader] entering Leader state
2016/06/23 07:47:14 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:14 [INFO] consul: New leader elected: Node 15228
2016/06/23 07:47:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:14 [DEBUG] raft: Node 127.0.0.1:15229 updated peer set (2): [127.0.0.1:15229]
2016/06/23 07:47:14 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:14 [INFO] consul: member 'Node 15228' joined, marking health alive
2016/06/23 07:47:15 [INFO] consul: shutting down server
2016/06/23 07:47:15 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:15 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:15 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:47:15 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogListServiceNodes_DistanceSort (2.32s)
=== RUN   TestCatalogNodeServices
2016/06/23 07:47:15 [INFO] raft: Node at 127.0.0.1:15233 [Follower] entering Follower state
2016/06/23 07:47:15 [INFO] serf: EventMemberJoin: Node 15232 127.0.0.1
2016/06/23 07:47:15 [INFO] consul: adding LAN server Node 15232 (Addr: 127.0.0.1:15233) (DC: dc1)
2016/06/23 07:47:15 [INFO] serf: EventMemberJoin: Node 15232.dc1 127.0.0.1
2016/06/23 07:47:15 [INFO] consul: adding WAN server Node 15232.dc1 (Addr: 127.0.0.1:15233) (DC: dc1)
2016/06/23 07:47:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:15 [INFO] raft: Node at 127.0.0.1:15233 [Candidate] entering Candidate state
2016/06/23 07:47:16 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:16 [DEBUG] raft: Vote granted from 127.0.0.1:15233. Tally: 1
2016/06/23 07:47:16 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:16 [INFO] raft: Node at 127.0.0.1:15233 [Leader] entering Leader state
2016/06/23 07:47:16 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:16 [INFO] consul: New leader elected: Node 15232
2016/06/23 07:47:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:16 [DEBUG] raft: Node 127.0.0.1:15233 updated peer set (2): [127.0.0.1:15233]
2016/06/23 07:47:17 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:17 [INFO] consul: member 'Node 15232' joined, marking health alive
2016/06/23 07:47:17 [INFO] consul: shutting down server
2016/06/23 07:47:17 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:17 [WARN] serf: Shutdown without a Leave
--- PASS: TestCatalogNodeServices (2.72s)
=== RUN   TestCatalogRegister_FailedCase1
2016/06/23 07:47:18 [INFO] raft: Node at 127.0.0.1:15237 [Follower] entering Follower state
2016/06/23 07:47:18 [INFO] serf: EventMemberJoin: Node 15236 127.0.0.1
2016/06/23 07:47:18 [INFO] consul: adding LAN server Node 15236 (Addr: 127.0.0.1:15237) (DC: dc1)
2016/06/23 07:47:18 [INFO] serf: EventMemberJoin: Node 15236.dc1 127.0.0.1
2016/06/23 07:47:18 [INFO] consul: adding WAN server Node 15236.dc1 (Addr: 127.0.0.1:15237) (DC: dc1)
2016/06/23 07:47:18 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:18 [INFO] raft: Node at 127.0.0.1:15237 [Candidate] entering Candidate state
2016/06/23 07:47:19 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:19 [DEBUG] raft: Vote granted from 127.0.0.1:15237. Tally: 1
2016/06/23 07:47:19 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:19 [INFO] raft: Node at 127.0.0.1:15237 [Leader] entering Leader state
2016/06/23 07:47:19 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:19 [INFO] consul: New leader elected: Node 15236
2016/06/23 07:47:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:20 [DEBUG] raft: Node 127.0.0.1:15237 updated peer set (2): [127.0.0.1:15237]
2016/06/23 07:47:20 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:20 [INFO] consul: member 'Node 15236' joined, marking health alive
2016/06/23 07:47:21 [INFO] consul: shutting down server
2016/06/23 07:47:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:21 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:47:21 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalogRegister_FailedCase1 (3.85s)
=== RUN   TestCatalog_ListServices_FilterACL
2016/06/23 07:47:22 [INFO] raft: Node at 127.0.0.1:15241 [Follower] entering Follower state
2016/06/23 07:47:22 [INFO] serf: EventMemberJoin: Node 15240 127.0.0.1
2016/06/23 07:47:22 [INFO] consul: adding LAN server Node 15240 (Addr: 127.0.0.1:15241) (DC: dc1)
2016/06/23 07:47:22 [INFO] serf: EventMemberJoin: Node 15240.dc1 127.0.0.1
2016/06/23 07:47:22 [INFO] consul: adding WAN server Node 15240.dc1 (Addr: 127.0.0.1:15241) (DC: dc1)
2016/06/23 07:47:22 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:22 [INFO] raft: Node at 127.0.0.1:15241 [Candidate] entering Candidate state
2016/06/23 07:47:22 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:22 [DEBUG] raft: Vote granted from 127.0.0.1:15241. Tally: 1
2016/06/23 07:47:22 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:22 [INFO] raft: Node at 127.0.0.1:15241 [Leader] entering Leader state
2016/06/23 07:47:22 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:22 [INFO] consul: New leader elected: Node 15240
2016/06/23 07:47:23 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:23 [DEBUG] raft: Node 127.0.0.1:15241 updated peer set (2): [127.0.0.1:15241]
2016/06/23 07:47:23 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:23 [INFO] consul: member 'Node 15240' joined, marking health alive
2016/06/23 07:47:25 [DEBUG] consul: dropping service "bar" from result due to ACLs
2016/06/23 07:47:25 [INFO] consul: shutting down server
2016/06/23 07:47:25 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:26 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:26 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalog_ListServices_FilterACL (4.38s)
=== RUN   TestCatalog_ServiceNodes_FilterACL
2016/06/23 07:47:26 [INFO] raft: Node at 127.0.0.1:15245 [Follower] entering Follower state
2016/06/23 07:47:26 [INFO] serf: EventMemberJoin: Node 15244 127.0.0.1
2016/06/23 07:47:26 [INFO] consul: adding LAN server Node 15244 (Addr: 127.0.0.1:15245) (DC: dc1)
2016/06/23 07:47:26 [INFO] serf: EventMemberJoin: Node 15244.dc1 127.0.0.1
2016/06/23 07:47:26 [INFO] consul: adding WAN server Node 15244.dc1 (Addr: 127.0.0.1:15245) (DC: dc1)
2016/06/23 07:47:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:26 [INFO] raft: Node at 127.0.0.1:15245 [Candidate] entering Candidate state
2016/06/23 07:47:27 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:27 [DEBUG] raft: Vote granted from 127.0.0.1:15245. Tally: 1
2016/06/23 07:47:27 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:27 [INFO] raft: Node at 127.0.0.1:15245 [Leader] entering Leader state
2016/06/23 07:47:27 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:27 [INFO] consul: New leader elected: Node 15244
2016/06/23 07:47:27 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:28 [DEBUG] raft: Node 127.0.0.1:15245 updated peer set (2): [127.0.0.1:15245]
2016/06/23 07:47:28 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:28 [INFO] consul: member 'Node 15244' joined, marking health alive
2016/06/23 07:47:30 [DEBUG] consul: dropping node "Node 15244" from result due to ACLs
2016/06/23 07:47:30 [INFO] consul: shutting down server
2016/06/23 07:47:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:30 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCatalog_ServiceNodes_FilterACL (4.67s)
=== RUN   TestCatalog_NodeServices_FilterACL
2016/06/23 07:47:32 [INFO] raft: Node at 127.0.0.1:15249 [Follower] entering Follower state
2016/06/23 07:47:32 [INFO] serf: EventMemberJoin: Node 15248 127.0.0.1
2016/06/23 07:47:32 [INFO] consul: adding LAN server Node 15248 (Addr: 127.0.0.1:15249) (DC: dc1)
2016/06/23 07:47:32 [INFO] serf: EventMemberJoin: Node 15248.dc1 127.0.0.1
2016/06/23 07:47:32 [INFO] consul: adding WAN server Node 15248.dc1 (Addr: 127.0.0.1:15249) (DC: dc1)
2016/06/23 07:47:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:32 [INFO] raft: Node at 127.0.0.1:15249 [Candidate] entering Candidate state
2016/06/23 07:47:32 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:32 [DEBUG] raft: Vote granted from 127.0.0.1:15249. Tally: 1
2016/06/23 07:47:32 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:32 [INFO] raft: Node at 127.0.0.1:15249 [Leader] entering Leader state
2016/06/23 07:47:32 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:32 [INFO] consul: New leader elected: Node 15248
2016/06/23 07:47:33 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:33 [DEBUG] raft: Node 127.0.0.1:15249 updated peer set (2): [127.0.0.1:15249]
2016/06/23 07:47:33 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:33 [INFO] consul: member 'Node 15248' joined, marking health alive
2016/06/23 07:47:35 [DEBUG] consul: dropping service "bar" from result due to ACLs
2016/06/23 07:47:35 [INFO] consul: shutting down server
2016/06/23 07:47:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:35 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:35 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:47:35 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestCatalog_NodeServices_FilterACL (5.09s)
=== RUN   TestClient_StartStop
2016/06/23 07:47:35 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:47:35 [INFO] consul: shutting down client
2016/06/23 07:47:35 [WARN] serf: Shutdown without a Leave
--- PASS: TestClient_StartStop (0.10s)
=== RUN   TestClient_JoinLAN
2016/06/23 07:47:36 [INFO] raft: Node at 127.0.0.1:15255 [Follower] entering Follower state
2016/06/23 07:47:36 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:36 [INFO] raft: Node at 127.0.0.1:15255 [Candidate] entering Candidate state
2016/06/23 07:47:36 [INFO] serf: EventMemberJoin: Node 15254 127.0.0.1
2016/06/23 07:47:36 [INFO] consul: adding LAN server Node 15254 (Addr: 127.0.0.1:15255) (DC: dc1)
2016/06/23 07:47:36 [INFO] serf: EventMemberJoin: Node 15254.dc1 127.0.0.1
2016/06/23 07:47:36 [INFO] consul: adding WAN server Node 15254.dc1 (Addr: 127.0.0.1:15255) (DC: dc1)
2016/06/23 07:47:36 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:47:36 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15256
2016/06/23 07:47:36 [DEBUG] memberlist: TCP connection from=127.0.0.1:35716
2016/06/23 07:47:36 [INFO] serf: EventMemberJoin: Node 15254 127.0.0.1
2016/06/23 07:47:36 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:47:36 [INFO] consul: adding server Node 15254 (Addr: 127.0.0.1:15255) (DC: dc1)
2016/06/23 07:47:36 [INFO] consul: shutting down client
2016/06/23 07:47:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:36 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/06/23 07:47:37 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/06/23 07:47:37 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/06/23 07:47:37 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/06/23 07:47:37 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/06/23 07:47:37 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/06/23 07:47:37 [INFO] consul: shutting down server
2016/06/23 07:47:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:37 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_JoinLAN (1.61s)
=== RUN   TestClient_JoinLAN_Invalid
2016/06/23 07:47:38 [INFO] raft: Node at 127.0.0.1:15261 [Follower] entering Follower state
2016/06/23 07:47:38 [INFO] serf: EventMemberJoin: Node 15260 127.0.0.1
2016/06/23 07:47:38 [INFO] consul: adding LAN server Node 15260 (Addr: 127.0.0.1:15261) (DC: dc1)
2016/06/23 07:47:38 [INFO] serf: EventMemberJoin: Node 15260.dc1 127.0.0.1
2016/06/23 07:47:38 [INFO] consul: adding WAN server Node 15260.dc1 (Addr: 127.0.0.1:15261) (DC: dc1)
2016/06/23 07:47:38 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:47:38 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15262
2016/06/23 07:47:38 [DEBUG] memberlist: TCP connection from=127.0.0.1:34052
2016/06/23 07:47:38 [ERR] memberlist: Failed push/pull merge: Member 'testco.internal' part of wrong datacenter 'other' from=127.0.0.1:34052
2016/06/23 07:47:38 [DEBUG] memberlist: Failed to join 127.0.0.1: Member 'Node 15260' part of wrong datacenter 'dc1'
2016/06/23 07:47:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:38 [INFO] raft: Node at 127.0.0.1:15261 [Candidate] entering Candidate state
2016/06/23 07:47:38 [INFO] consul: shutting down client
2016/06/23 07:47:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:38 [INFO] consul: shutting down server
2016/06/23 07:47:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:39 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_JoinLAN_Invalid (1.41s)
=== RUN   TestClient_JoinWAN_Invalid
2016/06/23 07:47:39 [INFO] raft: Node at 127.0.0.1:15267 [Follower] entering Follower state
2016/06/23 07:47:39 [INFO] serf: EventMemberJoin: Node 15266 127.0.0.1
2016/06/23 07:47:39 [INFO] consul: adding LAN server Node 15266 (Addr: 127.0.0.1:15267) (DC: dc1)
2016/06/23 07:47:39 [INFO] serf: EventMemberJoin: Node 15266.dc1 127.0.0.1
2016/06/23 07:47:39 [INFO] consul: adding WAN server Node 15266.dc1 (Addr: 127.0.0.1:15267) (DC: dc1)
2016/06/23 07:47:39 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:47:39 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15269
2016/06/23 07:47:39 [DEBUG] memberlist: TCP connection from=127.0.0.1:50432
2016/06/23 07:47:39 [ERR] memberlist: Failed push/pull merge: Member 'testco.internal' is not a server from=127.0.0.1:50432
2016/06/23 07:47:39 [DEBUG] memberlist: Failed to join 127.0.0.1: Member 'Node 15266.dc1' part of wrong datacenter 'dc1'
2016/06/23 07:47:39 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:39 [INFO] raft: Node at 127.0.0.1:15267 [Candidate] entering Candidate state
2016/06/23 07:47:39 [INFO] consul: shutting down client
2016/06/23 07:47:39 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:39 [INFO] consul: shutting down server
2016/06/23 07:47:39 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:40 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_JoinWAN_Invalid (1.25s)
=== RUN   TestClient_RPC
2016/06/23 07:47:40 [INFO] raft: Node at 127.0.0.1:15273 [Follower] entering Follower state
2016/06/23 07:47:40 [INFO] serf: EventMemberJoin: Node 15272 127.0.0.1
2016/06/23 07:47:40 [INFO] consul: adding LAN server Node 15272 (Addr: 127.0.0.1:15273) (DC: dc1)
2016/06/23 07:47:40 [INFO] serf: EventMemberJoin: Node 15272.dc1 127.0.0.1
2016/06/23 07:47:40 [INFO] consul: adding WAN server Node 15272.dc1 (Addr: 127.0.0.1:15273) (DC: dc1)
2016/06/23 07:47:40 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:47:40 [DEBUG] memberlist: TCP connection from=127.0.0.1:59858
2016/06/23 07:47:40 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15274
2016/06/23 07:47:40 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:47:40 [INFO] serf: EventMemberJoin: Node 15272 127.0.0.1
2016/06/23 07:47:40 [INFO] consul: adding server Node 15272 (Addr: 127.0.0.1:15273) (DC: dc1)
2016/06/23 07:47:40 [INFO] consul: shutting down client
2016/06/23 07:47:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:40 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:40 [INFO] raft: Node at 127.0.0.1:15273 [Candidate] entering Candidate state
2016/06/23 07:47:41 [INFO] consul: shutting down server
2016/06/23 07:47:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:41 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/06/23 07:47:41 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/06/23 07:47:41 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:41 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/06/23 07:47:41 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/06/23 07:47:41 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_RPC (1.31s)
=== RUN   TestClient_RPC_Pool
2016/06/23 07:47:42 [INFO] raft: Node at 127.0.0.1:15279 [Follower] entering Follower state
2016/06/23 07:47:42 [INFO] serf: EventMemberJoin: Node 15278 127.0.0.1
2016/06/23 07:47:42 [INFO] serf: EventMemberJoin: Node 15278.dc1 127.0.0.1
2016/06/23 07:47:42 [INFO] consul: adding WAN server Node 15278.dc1 (Addr: 127.0.0.1:15279) (DC: dc1)
2016/06/23 07:47:42 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:47:42 [INFO] consul: adding LAN server Node 15278 (Addr: 127.0.0.1:15279) (DC: dc1)
2016/06/23 07:47:42 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15280
2016/06/23 07:47:42 [DEBUG] memberlist: TCP connection from=127.0.0.1:39270
2016/06/23 07:47:42 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:47:42 [INFO] serf: EventMemberJoin: Node 15278 127.0.0.1
2016/06/23 07:47:42 [INFO] consul: adding server Node 15278 (Addr: 127.0.0.1:15279) (DC: dc1)
2016/06/23 07:47:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:42 [INFO] raft: Node at 127.0.0.1:15279 [Candidate] entering Candidate state
2016/06/23 07:47:42 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:47:42 [INFO] consul: shutting down client
2016/06/23 07:47:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:42 [INFO] consul: shutting down server
2016/06/23 07:47:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:42 [WARN] serf: Shutdown without a Leave
2016/06/23 07:47:42 [DEBUG] raft: Votes needed: 1
--- PASS: TestClient_RPC_Pool (1.35s)
=== RUN   TestClient_RPC_TLS
2016/06/23 07:47:43 [INFO] raft: Node at 127.0.0.1:15284 [Follower] entering Follower state
2016/06/23 07:47:43 [INFO] serf: EventMemberJoin: a.testco.internal 127.0.0.1
2016/06/23 07:47:43 [INFO] consul: adding LAN server a.testco.internal (Addr: 127.0.0.1:15284) (DC: dc1)
2016/06/23 07:47:43 [INFO] serf: EventMemberJoin: a.testco.internal.dc1 127.0.0.1
2016/06/23 07:47:43 [INFO] consul: adding WAN server a.testco.internal.dc1 (Addr: 127.0.0.1:15284) (DC: dc1)
2016/06/23 07:47:43 [INFO] serf: EventMemberJoin: b.testco.internal 127.0.0.1
2016/06/23 07:47:43 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15285
2016/06/23 07:47:43 [DEBUG] memberlist: TCP connection from=127.0.0.1:48886
2016/06/23 07:47:43 [INFO] serf: EventMemberJoin: b.testco.internal 127.0.0.1
2016/06/23 07:47:43 [INFO] serf: EventMemberJoin: a.testco.internal 127.0.0.1
2016/06/23 07:47:43 [INFO] consul: adding server a.testco.internal (Addr: 127.0.0.1:15284) (DC: dc1)
2016/06/23 07:47:43 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:47:43 [INFO] raft: Node at 127.0.0.1:15284 [Candidate] entering Candidate state
2016/06/23 07:47:43 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45634
2016/06/23 07:47:43 [DEBUG] serf: messageJoinType: b.testco.internal
2016/06/23 07:47:43 [DEBUG] serf: messageJoinType: b.testco.internal
2016/06/23 07:47:43 [DEBUG] serf: messageJoinType: b.testco.internal
2016/06/23 07:47:43 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45636
2016/06/23 07:47:43 [DEBUG] serf: messageJoinType: b.testco.internal
2016/06/23 07:47:44 [DEBUG] serf: messageJoinType: b.testco.internal
2016/06/23 07:47:44 [DEBUG] serf: messageJoinType: b.testco.internal
2016/06/23 07:47:44 [DEBUG] serf: messageJoinType: b.testco.internal
2016/06/23 07:47:44 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45638
2016/06/23 07:47:44 [DEBUG] serf: messageJoinType: b.testco.internal
2016/06/23 07:47:44 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45640
2016/06/23 07:47:44 [DEBUG] memberlist: Potential blocking operation. Last command took 10.109643ms
2016/06/23 07:47:44 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45642
2016/06/23 07:47:44 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45644
2016/06/23 07:47:44 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45646
2016/06/23 07:47:44 [DEBUG] raft: Votes needed: 1
2016/06/23 07:47:44 [DEBUG] raft: Vote granted from 127.0.0.1:15284. Tally: 1
2016/06/23 07:47:44 [INFO] raft: Election won. Tally: 1
2016/06/23 07:47:44 [INFO] raft: Node at 127.0.0.1:15284 [Leader] entering Leader state
2016/06/23 07:47:44 [INFO] consul: cluster leadership acquired
2016/06/23 07:47:44 [INFO] consul: New leader elected: a.testco.internal
2016/06/23 07:47:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:47:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:47:44 [INFO] consul: New leader elected: a.testco.internal
2016/06/23 07:47:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:47:44 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45648
2016/06/23 07:47:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:47:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:47:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:47:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:47:44 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:47:44 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45650
2016/06/23 07:47:44 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:47:44 [DEBUG] raft: Node 127.0.0.1:15284 updated peer set (2): [127.0.0.1:15284]
2016/06/23 07:47:44 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45652
2016/06/23 07:47:45 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45654
2016/06/23 07:47:45 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45656
2016/06/23 07:47:45 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45658
2016/06/23 07:47:45 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:47:45 [INFO] consul: member 'a.testco.internal' joined, marking health alive
2016/06/23 07:47:45 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45660
2016/06/23 07:47:45 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45662
2016/06/23 07:47:45 [INFO] consul: member 'b.testco.internal' joined, marking health alive
2016/06/23 07:47:45 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45664
2016/06/23 07:47:45 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45666
2016/06/23 07:47:46 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45668
2016/06/23 07:47:46 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45670
2016/06/23 07:47:46 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45672
2016/06/23 07:47:46 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45674
2016/06/23 07:47:46 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45676
2016/06/23 07:47:46 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45678
2016/06/23 07:47:46 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45680
2016/06/23 07:47:47 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45682
2016/06/23 07:47:47 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45684
2016/06/23 07:47:47 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45686
2016/06/23 07:47:47 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45688
2016/06/23 07:47:47 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45690
2016/06/23 07:47:47 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45692
2016/06/23 07:47:47 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45694
2016/06/23 07:47:47 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45696
2016/06/23 07:47:48 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45698
2016/06/23 07:47:48 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45700
2016/06/23 07:47:48 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45702
2016/06/23 07:47:48 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45704
2016/06/23 07:47:48 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45706
2016/06/23 07:47:48 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45708
2016/06/23 07:47:48 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45710
2016/06/23 07:47:49 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45712
2016/06/23 07:47:49 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45714
2016/06/23 07:47:49 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45716
2016/06/23 07:47:49 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45718
2016/06/23 07:47:49 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45720
2016/06/23 07:47:49 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45722
2016/06/23 07:47:49 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45724
2016/06/23 07:47:49 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45726
2016/06/23 07:47:50 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45728
2016/06/23 07:47:50 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45730
2016/06/23 07:47:50 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45732
2016/06/23 07:47:50 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45734
2016/06/23 07:47:50 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45736
2016/06/23 07:47:50 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45738
2016/06/23 07:47:50 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45740
2016/06/23 07:47:51 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45742
2016/06/23 07:47:51 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45744
2016/06/23 07:47:51 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45746
2016/06/23 07:47:51 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45748
2016/06/23 07:47:51 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45750
2016/06/23 07:47:51 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45752
2016/06/23 07:47:51 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45754
2016/06/23 07:47:51 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45756
2016/06/23 07:47:52 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45758
2016/06/23 07:47:52 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45760
2016/06/23 07:47:52 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45762
2016/06/23 07:47:52 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45764
2016/06/23 07:47:52 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45766
2016/06/23 07:47:52 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45768
2016/06/23 07:47:52 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45770
2016/06/23 07:47:53 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45772
2016/06/23 07:47:53 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45774
2016/06/23 07:47:53 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45776
2016/06/23 07:47:53 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45778
2016/06/23 07:47:53 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45780
2016/06/23 07:47:53 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45782
2016/06/23 07:47:53 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45784
2016/06/23 07:47:54 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45786
2016/06/23 07:47:54 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45788
2016/06/23 07:47:54 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45790
2016/06/23 07:47:54 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45792
2016/06/23 07:47:54 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45794
2016/06/23 07:47:54 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45796
2016/06/23 07:47:54 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45798
2016/06/23 07:47:54 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45800
2016/06/23 07:47:55 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45802
2016/06/23 07:47:55 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45804
2016/06/23 07:47:55 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45806
2016/06/23 07:47:55 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45808
2016/06/23 07:47:55 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45810
2016/06/23 07:47:55 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45812
2016/06/23 07:47:55 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45814
2016/06/23 07:47:55 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45816
2016/06/23 07:47:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45818
2016/06/23 07:47:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45820
2016/06/23 07:47:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45822
2016/06/23 07:47:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45824
2016/06/23 07:47:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45826
2016/06/23 07:47:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45828
2016/06/23 07:47:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45830
2016/06/23 07:47:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45832
2016/06/23 07:47:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45834
2016/06/23 07:47:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45836
2016/06/23 07:47:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45838
2016/06/23 07:47:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45840
2016/06/23 07:47:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45842
2016/06/23 07:47:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45844
2016/06/23 07:47:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45846
2016/06/23 07:47:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45848
2016/06/23 07:47:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45850
2016/06/23 07:47:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45852
2016/06/23 07:47:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45854
2016/06/23 07:47:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45856
2016/06/23 07:47:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45858
2016/06/23 07:47:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45860
2016/06/23 07:47:59 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45862
2016/06/23 07:47:59 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:45864
[... 119 identical "consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid" errors omitted; 127.0.0.1 ports 45866-46104, 07:47:59-07:48:16 ...]
2016/06/23 07:48:16 [DEBUG] memberlist: Potential blocking operation. Last command took 15.56681ms
[... 10 identical x509 "certificate has expired or is not yet valid" errors omitted; 127.0.0.1 ports 46106-46124, 07:48:16-07:48:18 ...]
2016/06/23 07:48:18 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15288
2016/06/23 07:48:18 [DEBUG] memberlist: TCP connection from=127.0.0.1:54870
[... 8 identical x509 "certificate has expired or is not yet valid" errors omitted; 127.0.0.1 ports 46126-46142, 07:48:18-07:48:19 ...]
2016/06/23 07:48:19 [DEBUG] memberlist: Potential blocking operation. Last command took 14.005096ms
[... 52 identical x509 "certificate has expired or is not yet valid" errors omitted; 127.0.0.1 ports 46144-46246, 07:48:19-07:48:26 ...]
2016/06/23 07:48:26 [DEBUG] memberlist: Potential blocking operation. Last command took 16.344167ms
2016/06/23 07:48:26 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46248
    [... 41 similar lines omitted: same TLS certificate-expiry error, 07:48:27-07:48:32, ports 46250-46332 ...]
2016/06/23 07:48:33 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46334
2016/06/23 07:48:33 [DEBUG] memberlist: Potential blocking operation. Last command took 13.299407ms
2016/06/23 07:48:33 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46336
    [... 32 similar lines omitted: same TLS certificate-expiry error, 07:48:33-07:48:38, ports 46338-46400 ...]
2016/06/23 07:48:38 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46402
2016/06/23 07:48:38 [DEBUG] memberlist: Potential blocking operation. Last command took 37.064468ms
2016/06/23 07:48:38 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46404
2016/06/23 07:48:38 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46406
2016/06/23 07:48:38 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46408
2016/06/23 07:48:39 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46410
2016/06/23 07:48:39 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15285
2016/06/23 07:48:39 [DEBUG] memberlist: TCP connection from=127.0.0.1:49668
2016/06/23 07:48:39 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46412
    [... 59 similar lines omitted: same TLS certificate-expiry error, 07:48:39-07:48:47, ports 46416-46532 ...]
2016/06/23 07:48:48 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46534
2016/06/23 07:48:48 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15288
2016/06/23 07:48:48 [DEBUG] memberlist: TCP connection from=127.0.0.1:55280
2016/06/23 07:48:48 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46536
    [... 51 similar lines omitted: same TLS certificate-expiry error, 07:48:48-07:48:56, ports 46540-46642 ...]
2016/06/23 07:48:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46644
2016/06/23 07:48:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46646
2016/06/23 07:48:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46648
2016/06/23 07:48:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46650
2016/06/23 07:48:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46652
2016/06/23 07:48:56 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46654
2016/06/23 07:48:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46656
2016/06/23 07:48:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46658
2016/06/23 07:48:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46660
2016/06/23 07:48:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46662
2016/06/23 07:48:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46664
2016/06/23 07:48:57 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46666
2016/06/23 07:48:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46668
2016/06/23 07:48:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46670
2016/06/23 07:48:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46672
2016/06/23 07:48:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46674
2016/06/23 07:48:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46676
2016/06/23 07:48:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46678
2016/06/23 07:48:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46680
2016/06/23 07:48:59 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46682
2016/06/23 07:48:59 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46684
2016/06/23 07:48:59 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46686
2016/06/23 07:48:59 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46688
2016/06/23 07:48:59 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46690
2016/06/23 07:48:59 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46692
2016/06/23 07:49:00 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46694
2016/06/23 07:49:00 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46696
2016/06/23 07:49:00 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46698
2016/06/23 07:49:00 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46700
2016/06/23 07:49:00 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46702
2016/06/23 07:49:00 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46704
2016/06/23 07:49:00 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46706
2016/06/23 07:49:01 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46708
2016/06/23 07:49:01 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46710
2016/06/23 07:49:01 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46712
2016/06/23 07:49:01 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46714
2016/06/23 07:49:01 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46716
2016/06/23 07:49:01 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46718
2016/06/23 07:49:01 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46720
2016/06/23 07:49:02 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46722
2016/06/23 07:49:02 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46724
2016/06/23 07:49:02 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46726
2016/06/23 07:49:02 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46728
2016/06/23 07:49:02 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46730
2016/06/23 07:49:02 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46732
2016/06/23 07:49:02 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46734
2016/06/23 07:49:03 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46736
2016/06/23 07:49:03 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46738
2016/06/23 07:49:03 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46740
2016/06/23 07:49:03 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46742
2016/06/23 07:49:03 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46744
2016/06/23 07:49:03 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46746
2016/06/23 07:49:03 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46748
2016/06/23 07:49:04 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46750
2016/06/23 07:49:04 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46752
2016/06/23 07:49:04 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46754
2016/06/23 07:49:04 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46756
2016/06/23 07:49:04 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46758
2016/06/23 07:49:04 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46760
2016/06/23 07:49:04 [DEBUG] memberlist: Potential blocking operation. Last command took 16.626842ms
[... 18 identical consul.rpc TLS verification errors, 07:49:05–07:49:07, source ports 46762–46796 ...]
2016/06/23 07:49:07 [DEBUG] memberlist: Potential blocking operation. Last command took 10.070975ms
[... 10 identical consul.rpc TLS verification errors, 07:49:07–07:49:09, source ports 46798–46818 ...]
2016/06/23 07:49:09 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15285
2016/06/23 07:49:09 [DEBUG] memberlist: TCP connection from=127.0.0.1:50074
[... 45 identical consul.rpc TLS verification errors, 07:49:09–07:49:15, source ports 46822–46910 ...]
2016/06/23 07:49:15 [DEBUG] memberlist: Potential blocking operation. Last command took 10.592324ms
[... 7 identical consul.rpc TLS verification errors, 07:49:15–07:49:16, source ports 46912–46924 ...]
2016/06/23 07:49:16 [DEBUG] memberlist: Potential blocking operation. Last command took 10.693327ms
[... 10 identical consul.rpc TLS verification errors, 07:49:16–07:49:18, source ports 46926–46944 ...]
2016/06/23 07:49:18 [DEBUG] memberlist: TCP connection from=127.0.0.1:55688
2016/06/23 07:49:18 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15288
[... 24 identical consul.rpc TLS verification errors, 07:49:18–07:49:21, source ports 46948–46994 ...]
2016/06/23 07:49:21 [DEBUG] memberlist: Potential blocking operation. Last command took 11.527352ms
2016/06/23 07:49:21 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:46996
[previous message repeated 48 times through 07:49:28, from=127.0.0.1 ports 46998-47092]
2016/06/23 07:49:29 [DEBUG] memberlist: Potential blocking operation. Last command took 13.525747ms
2016/06/23 07:49:29 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47094
[previous message repeated 42 times through 07:49:35, from=127.0.0.1 ports 47096-47178]
2016/06/23 07:49:35 [DEBUG] memberlist: Potential blocking operation. Last command took 27.037494ms
2016/06/23 07:49:35 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47180
[previous message repeated 13 times through 07:49:37, from=127.0.0.1 ports 47182-47206]
2016/06/23 07:49:37 [DEBUG] memberlist: Potential blocking operation. Last command took 14.147433ms
2016/06/23 07:49:37 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47208
2016/06/23 07:49:37 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47210
2016/06/23 07:49:37 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47212
2016/06/23 07:49:37 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47214
2016/06/23 07:49:38 [DEBUG] memberlist: Potential blocking operation. Last command took 31.425962ms
2016/06/23 07:49:38 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47216
[previous message repeated 7 times through 07:49:39, from=127.0.0.1 ports 47218-47230]
2016/06/23 07:49:39 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15285
2016/06/23 07:49:39 [DEBUG] memberlist: TCP connection from=127.0.0.1:50486
2016/06/23 07:49:39 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47234
[previous message repeated 6 times through 07:49:40, from=127.0.0.1 ports 47236-47246]
2016/06/23 07:49:40 [DEBUG] memberlist: Potential blocking operation. Last command took 19.259923ms
2016/06/23 07:49:40 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47248
[previous message repeated 18 times through 07:49:43, from=127.0.0.1 ports 47250-47284]
2016/06/23 07:49:43 [DEBUG] memberlist: Potential blocking operation. Last command took 10.408319ms
2016/06/23 07:49:43 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47286
[previous message repeated 31 times through 07:49:47, from=127.0.0.1 ports 47288-47348]
2016/06/23 07:49:47 [DEBUG] memberlist: Potential blocking operation. Last command took 12.017034ms
2016/06/23 07:49:47 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47350
2016/06/23 07:49:48 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47352
2016/06/23 07:49:48 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47354
2016/06/23 07:49:48 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15288
2016/06/23 07:49:48 [DEBUG] memberlist: TCP connection from=127.0.0.1:56098
2016/06/23 07:49:48 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47358
[... 30 similar lines elided: the same consul.rpc TLS certificate error from 127.0.0.1 ports 47360-47418 (07:49:48-07:49:52) ...]
2016/06/23 07:49:52 [DEBUG] memberlist: Potential blocking operation. Last command took 13.453079ms
2016/06/23 07:49:52 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47420
[... 41 similar lines elided: the same consul.rpc TLS certificate error from 127.0.0.1 ports 47422-47504 (07:49:53-07:49:58) ...]
2016/06/23 07:49:58 [DEBUG] memberlist: Potential blocking operation. Last command took 11.728026ms
2016/06/23 07:49:58 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47506
[... 17 similar lines elided: the same consul.rpc TLS certificate error from 127.0.0.1 ports 47508-47542 (07:49:59-07:50:01) ...]
2016/06/23 07:50:01 [DEBUG] memberlist: Potential blocking operation. Last command took 21.537326ms
2016/06/23 07:50:01 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47544
[... 14 similar lines elided: the same consul.rpc TLS certificate error from 127.0.0.1 ports 47546-47572 (07:50:01-07:50:04) ...]
2016/06/23 07:50:04 [DEBUG] memberlist: Potential blocking operation. Last command took 11.279345ms
2016/06/23 07:50:04 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47574
2016/06/23 07:50:04 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47576
2016/06/23 07:50:04 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47580
2016/06/23 07:50:04 [DEBUG] memberlist: Potential blocking operation. Last command took 23.994068ms
2016/06/23 07:50:05 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47582
[... 6 similar lines elided: the same consul.rpc TLS certificate error from 127.0.0.1 ports 47586-47596 (07:50:05-07:50:06) ...]
2016/06/23 07:50:06 [DEBUG] memberlist: Potential blocking operation. Last command took 12.671388ms
2016/06/23 07:50:06 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47598
[... 6 similar lines elided: the same consul.rpc TLS certificate error from 127.0.0.1 ports 47600-47610 (07:50:06-07:50:07) ...]
2016/06/23 07:50:07 [DEBUG] memberlist: Potential blocking operation. Last command took 31.156287ms
2016/06/23 07:50:07 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47614
[... 10 similar lines elided: the same consul.rpc TLS certificate error from 127.0.0.1 ports 47616-47636 (07:50:07-07:50:09) ...]
2016/06/23 07:50:09 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15285
2016/06/23 07:50:09 [DEBUG] memberlist: TCP connection from=127.0.0.1:50892
2016/06/23 07:50:09 [ERR] consul.rpc: failed to read byte: tls: failed to verify client's certificate: x509: certificate has expired or is not yet valid from=127.0.0.1:47640
[... 14 similar lines elided: the same consul.rpc TLS certificate error from 127.0.0.1 ports 47642-47668 (07:50:09-07:50:11) ...]
2016/06/23 07:50:11 [INFO] consul: shutting down client
2016/06/23 07:50:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:11 [INFO] consul: shutting down server
2016/06/23 07:50:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:11 [DEBUG] memberlist: Failed UDP ping: b.testco.internal (timeout reached)
2016/06/23 07:50:11 [INFO] memberlist: Suspect b.testco.internal has failed, no acks received
2016/06/23 07:50:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:11 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- FAIL: TestClient_RPC_TLS (149.01s)
	client_test.go:283: err: rpc error: failed to get conn: remote error: bad certificate
=== RUN   TestClientServer_UserEvent
2016/06/23 07:50:11 [INFO] serf: EventMemberJoin: Client 15289 127.0.0.1
2016/06/23 07:50:12 [INFO] memberlist: Marking b.testco.internal as failed, suspect timeout reached
2016/06/23 07:50:12 [INFO] serf: EventMemberFailed: b.testco.internal 127.0.0.1
2016/06/23 07:50:12 [INFO] raft: Node at 127.0.0.1:15293 [Follower] entering Follower state
2016/06/23 07:50:12 [INFO] serf: EventMemberJoin: Node 15292 127.0.0.1
2016/06/23 07:50:12 [INFO] consul: adding LAN server Node 15292 (Addr: 127.0.0.1:15293) (DC: dc1)
2016/06/23 07:50:12 [INFO] serf: EventMemberJoin: Node 15292.dc1 127.0.0.1
2016/06/23 07:50:12 [INFO] consul: adding WAN server Node 15292.dc1 (Addr: 127.0.0.1:15293) (DC: dc1)
2016/06/23 07:50:12 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15294
2016/06/23 07:50:12 [DEBUG] memberlist: TCP connection from=127.0.0.1:36632
2016/06/23 07:50:12 [INFO] serf: EventMemberJoin: Client 15289 127.0.0.1
2016/06/23 07:50:12 [INFO] serf: EventMemberJoin: Node 15292 127.0.0.1
2016/06/23 07:50:12 [INFO] consul: adding server Node 15292 (Addr: 127.0.0.1:15293) (DC: dc1)
2016/06/23 07:50:12 [DEBUG] serf: messageJoinType: Client 15289
2016/06/23 07:50:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:12 [INFO] raft: Node at 127.0.0.1:15293 [Candidate] entering Candidate state
2016/06/23 07:50:12 [DEBUG] serf: messageJoinType: Client 15289
2016/06/23 07:50:12 [DEBUG] serf: messageJoinType: Client 15289
2016/06/23 07:50:12 [DEBUG] serf: messageJoinType: Client 15289
2016/06/23 07:50:12 [DEBUG] serf: messageJoinType: Client 15289
2016/06/23 07:50:12 [DEBUG] serf: messageJoinType: Client 15289
2016/06/23 07:50:12 [DEBUG] serf: messageJoinType: Client 15289
2016/06/23 07:50:12 [DEBUG] serf: messageJoinType: Client 15289
2016/06/23 07:50:13 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:13 [DEBUG] raft: Vote granted from 127.0.0.1:15293. Tally: 1
2016/06/23 07:50:13 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:13 [INFO] raft: Node at 127.0.0.1:15293 [Leader] entering Leader state
2016/06/23 07:50:13 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:13 [INFO] consul: New leader elected: Node 15292
2016/06/23 07:50:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:50:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:50:13 [INFO] consul: New leader elected: Node 15292
2016/06/23 07:50:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:50:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:50:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:50:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:50:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:50:13 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:50:13 [DEBUG] raft: Node 127.0.0.1:15293 updated peer set (2): [127.0.0.1:15293]
2016/06/23 07:50:13 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:13 [INFO] consul: member 'Node 15292' joined, marking health alive
2016/06/23 07:50:13 [INFO] consul: member 'Client 15289' joined, marking health alive
2016/06/23 07:50:13 [DEBUG] consul: user event: foo
2016/06/23 07:50:13 [DEBUG] serf: messageUserEventType: consul:event:foo
2016/06/23 07:50:13 [DEBUG] consul: user event: foo
2016/06/23 07:50:13 [INFO] consul: shutting down server
2016/06/23 07:50:13 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:13 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:14 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:50:14 [INFO] consul: shutting down client
2016/06/23 07:50:14 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientServer_UserEvent (2.21s)
=== RUN   TestClient_Encrypted
2016/06/23 07:50:14 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:50:14 [INFO] serf: EventMemberJoin: Client 15298 127.0.0.1
2016/06/23 07:50:14 [INFO] consul: shutting down client
2016/06/23 07:50:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:14 [INFO] consul: shutting down client
2016/06/23 07:50:14 [WARN] serf: Shutdown without a Leave
--- PASS: TestClient_Encrypted (0.28s)
=== RUN   TestCoordinate_Update
2016/06/23 07:50:15 [INFO] raft: Node at 127.0.0.1:15302 [Follower] entering Follower state
2016/06/23 07:50:15 [INFO] serf: EventMemberJoin: Node 15301 127.0.0.1
2016/06/23 07:50:15 [INFO] consul: adding LAN server Node 15301 (Addr: 127.0.0.1:15302) (DC: dc1)
2016/06/23 07:50:15 [INFO] serf: EventMemberJoin: Node 15301.dc1 127.0.0.1
2016/06/23 07:50:15 [INFO] consul: adding WAN server Node 15301.dc1 (Addr: 127.0.0.1:15302) (DC: dc1)
2016/06/23 07:50:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:15 [INFO] raft: Node at 127.0.0.1:15302 [Candidate] entering Candidate state
2016/06/23 07:50:15 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:15 [DEBUG] raft: Vote granted from 127.0.0.1:15302. Tally: 1
2016/06/23 07:50:15 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:15 [INFO] raft: Node at 127.0.0.1:15302 [Leader] entering Leader state
2016/06/23 07:50:15 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:15 [INFO] consul: New leader elected: Node 15301
2016/06/23 07:50:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:16 [DEBUG] raft: Node 127.0.0.1:15302 updated peer set (2): [127.0.0.1:15302]
2016/06/23 07:50:16 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:16 [INFO] consul: member 'Node 15301' joined, marking health alive
2016/06/23 07:50:23 [WARN] consul.coordinate: Discarded 1 coordinate updates
2016/06/23 07:50:24 [INFO] consul: shutting down server
2016/06/23 07:50:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:25 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- FAIL: TestCoordinate_Update (10.60s)
	coordinate_endpoint_test.go:178: wrong number of coordinates dropped, 6 != 1
=== RUN   TestCoordinate_ListDatacenters
2016/06/23 07:50:25 [INFO] raft: Node at 127.0.0.1:15306 [Follower] entering Follower state
2016/06/23 07:50:25 [INFO] serf: EventMemberJoin: Node 15305 127.0.0.1
2016/06/23 07:50:25 [INFO] consul: adding LAN server Node 15305 (Addr: 127.0.0.1:15306) (DC: dc1)
2016/06/23 07:50:25 [INFO] serf: EventMemberJoin: Node 15305.dc1 127.0.0.1
2016/06/23 07:50:25 [INFO] consul: adding WAN server Node 15305.dc1 (Addr: 127.0.0.1:15306) (DC: dc1)
2016/06/23 07:50:25 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:25 [INFO] raft: Node at 127.0.0.1:15306 [Candidate] entering Candidate state
2016/06/23 07:50:26 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:26 [DEBUG] raft: Vote granted from 127.0.0.1:15306. Tally: 1
2016/06/23 07:50:26 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:26 [INFO] raft: Node at 127.0.0.1:15306 [Leader] entering Leader state
2016/06/23 07:50:26 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:26 [INFO] consul: New leader elected: Node 15305
2016/06/23 07:50:26 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:27 [DEBUG] raft: Node 127.0.0.1:15306 updated peer set (2): [127.0.0.1:15306]
2016/06/23 07:50:27 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:27 [INFO] consul: member 'Node 15305' joined, marking health alive
2016/06/23 07:50:27 [INFO] consul: shutting down server
2016/06/23 07:50:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:27 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestCoordinate_ListDatacenters (2.45s)
=== RUN   TestCoordinate_ListNodes
2016/06/23 07:50:28 [INFO] raft: Node at 127.0.0.1:15310 [Follower] entering Follower state
2016/06/23 07:50:28 [INFO] serf: EventMemberJoin: Node 15309 127.0.0.1
2016/06/23 07:50:28 [INFO] consul: adding LAN server Node 15309 (Addr: 127.0.0.1:15310) (DC: dc1)
2016/06/23 07:50:28 [INFO] serf: EventMemberJoin: Node 15309.dc1 127.0.0.1
2016/06/23 07:50:28 [INFO] consul: adding WAN server Node 15309.dc1 (Addr: 127.0.0.1:15310) (DC: dc1)
2016/06/23 07:50:28 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:28 [INFO] raft: Node at 127.0.0.1:15310 [Candidate] entering Candidate state
2016/06/23 07:50:28 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:28 [DEBUG] raft: Vote granted from 127.0.0.1:15310. Tally: 1
2016/06/23 07:50:28 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:28 [INFO] raft: Node at 127.0.0.1:15310 [Leader] entering Leader state
2016/06/23 07:50:28 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:28 [INFO] consul: New leader elected: Node 15309
2016/06/23 07:50:28 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:29 [DEBUG] raft: Node 127.0.0.1:15310 updated peer set (2): [127.0.0.1:15310]
2016/06/23 07:50:29 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:29 [INFO] consul: member 'Node 15309' joined, marking health alive
2016/06/23 07:50:31 [INFO] consul: shutting down server
2016/06/23 07:50:31 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:31 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:31 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:50:31 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- FAIL: TestCoordinate_ListNodes (3.84s)
	coordinate_endpoint_test.go:286: bad: []
=== RUN   TestFilterDirEnt
--- PASS: TestFilterDirEnt (0.00s)
=== RUN   TestKeys
--- PASS: TestKeys (0.00s)
=== RUN   TestFSM_RegisterNode
--- PASS: TestFSM_RegisterNode (0.00s)
=== RUN   TestFSM_RegisterNode_Service
--- PASS: TestFSM_RegisterNode_Service (0.00s)
=== RUN   TestFSM_DeregisterService
--- PASS: TestFSM_DeregisterService (0.00s)
=== RUN   TestFSM_DeregisterCheck
--- PASS: TestFSM_DeregisterCheck (0.00s)
=== RUN   TestFSM_DeregisterNode
--- PASS: TestFSM_DeregisterNode (0.01s)
=== RUN   TestFSM_SnapshotRestore
2016/06/23 07:50:31 [INFO] consul.fsm: snapshot created in 137.671µs
--- PASS: TestFSM_SnapshotRestore (0.02s)
=== RUN   TestFSM_KVSSet
--- PASS: TestFSM_KVSSet (0.00s)
=== RUN   TestFSM_KVSDelete
--- PASS: TestFSM_KVSDelete (0.01s)
=== RUN   TestFSM_KVSDeleteTree
--- PASS: TestFSM_KVSDeleteTree (0.00s)
=== RUN   TestFSM_KVSDeleteCheckAndSet
--- PASS: TestFSM_KVSDeleteCheckAndSet (0.00s)
=== RUN   TestFSM_KVSCheckAndSet
--- PASS: TestFSM_KVSCheckAndSet (0.00s)
=== RUN   TestFSM_CoordinateUpdate
--- PASS: TestFSM_CoordinateUpdate (0.00s)
=== RUN   TestFSM_SessionCreate_Destroy
--- PASS: TestFSM_SessionCreate_Destroy (0.01s)
=== RUN   TestFSM_KVSLock
--- PASS: TestFSM_KVSLock (0.00s)
=== RUN   TestFSM_KVSUnlock
--- PASS: TestFSM_KVSUnlock (0.00s)
=== RUN   TestFSM_ACL_Set_Delete
--- PASS: TestFSM_ACL_Set_Delete (0.00s)
=== RUN   TestFSM_PreparedQuery_CRUD
--- PASS: TestFSM_PreparedQuery_CRUD (0.01s)
=== RUN   TestFSM_TombstoneReap
--- PASS: TestFSM_TombstoneReap (0.00s)
=== RUN   TestFSM_IgnoreUnknown
2016/06/23 07:50:31 [WARN] consul.fsm: ignoring unknown message type (64), upgrade to newer version
--- PASS: TestFSM_IgnoreUnknown (0.00s)
=== RUN   TestHealth_ChecksInState
2016/06/23 07:50:31 [INFO] raft: Node at 127.0.0.1:15314 [Follower] entering Follower state
2016/06/23 07:50:31 [INFO] serf: EventMemberJoin: Node 15313 127.0.0.1
2016/06/23 07:50:31 [INFO] consul: adding LAN server Node 15313 (Addr: 127.0.0.1:15314) (DC: dc1)
2016/06/23 07:50:31 [INFO] serf: EventMemberJoin: Node 15313.dc1 127.0.0.1
2016/06/23 07:50:31 [INFO] consul: adding WAN server Node 15313.dc1 (Addr: 127.0.0.1:15314) (DC: dc1)
2016/06/23 07:50:31 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:31 [INFO] raft: Node at 127.0.0.1:15314 [Candidate] entering Candidate state
2016/06/23 07:50:32 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:32 [DEBUG] raft: Vote granted from 127.0.0.1:15314. Tally: 1
2016/06/23 07:50:32 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:32 [INFO] raft: Node at 127.0.0.1:15314 [Leader] entering Leader state
2016/06/23 07:50:32 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:32 [INFO] consul: New leader elected: Node 15313
2016/06/23 07:50:32 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:32 [DEBUG] raft: Node 127.0.0.1:15314 updated peer set (2): [127.0.0.1:15314]
2016/06/23 07:50:32 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:32 [INFO] consul: member 'Node 15313' joined, marking health alive
2016/06/23 07:50:33 [INFO] consul: shutting down server
2016/06/23 07:50:33 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:33 [WARN] serf: Shutdown without a Leave
--- PASS: TestHealth_ChecksInState (2.26s)
=== RUN   TestHealth_ChecksInState_DistanceSort
2016/06/23 07:50:34 [INFO] raft: Node at 127.0.0.1:15318 [Follower] entering Follower state
2016/06/23 07:50:34 [INFO] serf: EventMemberJoin: Node 15317 127.0.0.1
2016/06/23 07:50:34 [INFO] consul: adding LAN server Node 15317 (Addr: 127.0.0.1:15318) (DC: dc1)
2016/06/23 07:50:34 [INFO] serf: EventMemberJoin: Node 15317.dc1 127.0.0.1
2016/06/23 07:50:34 [INFO] consul: adding WAN server Node 15317.dc1 (Addr: 127.0.0.1:15318) (DC: dc1)
2016/06/23 07:50:34 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:34 [INFO] raft: Node at 127.0.0.1:15318 [Candidate] entering Candidate state
2016/06/23 07:50:35 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:35 [DEBUG] raft: Vote granted from 127.0.0.1:15318. Tally: 1
2016/06/23 07:50:35 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:35 [INFO] raft: Node at 127.0.0.1:15318 [Leader] entering Leader state
2016/06/23 07:50:35 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:35 [INFO] consul: New leader elected: Node 15317
2016/06/23 07:50:35 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:36 [DEBUG] raft: Node 127.0.0.1:15318 updated peer set (2): [127.0.0.1:15318]
2016/06/23 07:50:36 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:36 [INFO] consul: member 'Node 15317' joined, marking health alive
2016/06/23 07:50:37 [INFO] consul: shutting down server
2016/06/23 07:50:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:37 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:50:37 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestHealth_ChecksInState_DistanceSort (4.25s)
=== RUN   TestHealth_NodeChecks
2016/06/23 07:50:38 [INFO] serf: EventMemberJoin: Node 15321 127.0.0.1
2016/06/23 07:50:38 [INFO] consul: adding LAN server Node 15321 (Addr: 127.0.0.1:15322) (DC: dc1)
2016/06/23 07:50:38 [INFO] serf: EventMemberJoin: Node 15321.dc1 127.0.0.1
2016/06/23 07:50:38 [INFO] raft: Node at 127.0.0.1:15322 [Follower] entering Follower state
2016/06/23 07:50:38 [INFO] consul: adding WAN server Node 15321.dc1 (Addr: 127.0.0.1:15322) (DC: dc1)
2016/06/23 07:50:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:38 [INFO] raft: Node at 127.0.0.1:15322 [Candidate] entering Candidate state
2016/06/23 07:50:39 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:39 [DEBUG] raft: Vote granted from 127.0.0.1:15322. Tally: 1
2016/06/23 07:50:39 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:39 [INFO] raft: Node at 127.0.0.1:15322 [Leader] entering Leader state
2016/06/23 07:50:39 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:39 [INFO] consul: New leader elected: Node 15321
2016/06/23 07:50:39 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:40 [DEBUG] raft: Node 127.0.0.1:15322 updated peer set (2): [127.0.0.1:15322]
2016/06/23 07:50:40 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:40 [INFO] consul: member 'Node 15321' joined, marking health alive
2016/06/23 07:50:40 [INFO] consul: shutting down server
2016/06/23 07:50:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:40 [WARN] serf: Shutdown without a Leave
--- PASS: TestHealth_NodeChecks (2.85s)
=== RUN   TestHealth_ServiceChecks
2016/06/23 07:50:42 [INFO] raft: Node at 127.0.0.1:15326 [Follower] entering Follower state
2016/06/23 07:50:42 [INFO] serf: EventMemberJoin: Node 15325 127.0.0.1
2016/06/23 07:50:42 [INFO] consul: adding LAN server Node 15325 (Addr: 127.0.0.1:15326) (DC: dc1)
2016/06/23 07:50:42 [INFO] serf: EventMemberJoin: Node 15325.dc1 127.0.0.1
2016/06/23 07:50:42 [INFO] consul: adding WAN server Node 15325.dc1 (Addr: 127.0.0.1:15326) (DC: dc1)
2016/06/23 07:50:42 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:42 [INFO] raft: Node at 127.0.0.1:15326 [Candidate] entering Candidate state
2016/06/23 07:50:42 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:42 [DEBUG] raft: Vote granted from 127.0.0.1:15326. Tally: 1
2016/06/23 07:50:42 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:42 [INFO] raft: Node at 127.0.0.1:15326 [Leader] entering Leader state
2016/06/23 07:50:42 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:42 [INFO] consul: New leader elected: Node 15325
2016/06/23 07:50:42 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:42 [DEBUG] raft: Node 127.0.0.1:15326 updated peer set (2): [127.0.0.1:15326]
2016/06/23 07:50:43 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:43 [INFO] consul: member 'Node 15325' joined, marking health alive
2016/06/23 07:50:43 [INFO] consul: shutting down server
2016/06/23 07:50:43 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:43 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:44 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_ServiceChecks (3.31s)
=== RUN   TestHealth_ServiceChecks_DistanceSort
2016/06/23 07:50:44 [INFO] raft: Node at 127.0.0.1:15330 [Follower] entering Follower state
2016/06/23 07:50:44 [INFO] serf: EventMemberJoin: Node 15329 127.0.0.1
2016/06/23 07:50:44 [INFO] consul: adding LAN server Node 15329 (Addr: 127.0.0.1:15330) (DC: dc1)
2016/06/23 07:50:44 [INFO] serf: EventMemberJoin: Node 15329.dc1 127.0.0.1
2016/06/23 07:50:44 [INFO] consul: adding WAN server Node 15329.dc1 (Addr: 127.0.0.1:15330) (DC: dc1)
2016/06/23 07:50:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:44 [INFO] raft: Node at 127.0.0.1:15330 [Candidate] entering Candidate state
2016/06/23 07:50:45 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:45 [DEBUG] raft: Vote granted from 127.0.0.1:15330. Tally: 1
2016/06/23 07:50:45 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:45 [INFO] raft: Node at 127.0.0.1:15330 [Leader] entering Leader state
2016/06/23 07:50:45 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:45 [INFO] consul: New leader elected: Node 15329
2016/06/23 07:50:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:45 [DEBUG] raft: Node 127.0.0.1:15330 updated peer set (2): [127.0.0.1:15330]
2016/06/23 07:50:45 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:45 [INFO] consul: member 'Node 15329' joined, marking health alive
2016/06/23 07:50:46 [INFO] consul: shutting down server
2016/06/23 07:50:46 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:47 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:47 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:50:47 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestHealth_ServiceChecks_DistanceSort (3.05s)
=== RUN   TestHealth_ServiceNodes
2016/06/23 07:50:47 [INFO] raft: Node at 127.0.0.1:15334 [Follower] entering Follower state
2016/06/23 07:50:47 [INFO] serf: EventMemberJoin: Node 15333 127.0.0.1
2016/06/23 07:50:47 [INFO] consul: adding LAN server Node 15333 (Addr: 127.0.0.1:15334) (DC: dc1)
2016/06/23 07:50:47 [INFO] serf: EventMemberJoin: Node 15333.dc1 127.0.0.1
2016/06/23 07:50:47 [INFO] consul: adding WAN server Node 15333.dc1 (Addr: 127.0.0.1:15334) (DC: dc1)
2016/06/23 07:50:47 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:47 [INFO] raft: Node at 127.0.0.1:15334 [Candidate] entering Candidate state
2016/06/23 07:50:48 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:48 [DEBUG] raft: Vote granted from 127.0.0.1:15334. Tally: 1
2016/06/23 07:50:48 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:48 [INFO] raft: Node at 127.0.0.1:15334 [Leader] entering Leader state
2016/06/23 07:50:48 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:48 [INFO] consul: New leader elected: Node 15333
2016/06/23 07:50:48 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:48 [DEBUG] raft: Node 127.0.0.1:15334 updated peer set (2): [127.0.0.1:15334]
2016/06/23 07:50:48 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:48 [INFO] consul: member 'Node 15333' joined, marking health alive
2016/06/23 07:50:50 [INFO] consul: shutting down server
2016/06/23 07:50:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:50 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_ServiceNodes (3.39s)
=== RUN   TestHealth_ServiceNodes_DistanceSort
2016/06/23 07:50:51 [INFO] raft: Node at 127.0.0.1:15338 [Follower] entering Follower state
2016/06/23 07:50:51 [INFO] serf: EventMemberJoin: Node 15337 127.0.0.1
2016/06/23 07:50:51 [INFO] consul: adding LAN server Node 15337 (Addr: 127.0.0.1:15338) (DC: dc1)
2016/06/23 07:50:51 [INFO] serf: EventMemberJoin: Node 15337.dc1 127.0.0.1
2016/06/23 07:50:51 [INFO] consul: adding WAN server Node 15337.dc1 (Addr: 127.0.0.1:15338) (DC: dc1)
2016/06/23 07:50:51 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:51 [INFO] raft: Node at 127.0.0.1:15338 [Candidate] entering Candidate state
2016/06/23 07:50:51 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:51 [DEBUG] raft: Vote granted from 127.0.0.1:15338. Tally: 1
2016/06/23 07:50:51 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:51 [INFO] raft: Node at 127.0.0.1:15338 [Leader] entering Leader state
2016/06/23 07:50:51 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:51 [INFO] consul: New leader elected: Node 15337
2016/06/23 07:50:52 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:52 [DEBUG] raft: Node 127.0.0.1:15338 updated peer set (2): [127.0.0.1:15338]
2016/06/23 07:50:52 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:52 [INFO] consul: member 'Node 15337' joined, marking health alive
2016/06/23 07:50:53 [INFO] consul: shutting down server
2016/06/23 07:50:53 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:53 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:53 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestHealth_ServiceNodes_DistanceSort (3.36s)
=== RUN   TestHealth_NodeChecks_FilterACL
2016/06/23 07:50:54 [INFO] raft: Node at 127.0.0.1:15342 [Follower] entering Follower state
2016/06/23 07:50:54 [INFO] serf: EventMemberJoin: Node 15341 127.0.0.1
2016/06/23 07:50:54 [INFO] consul: adding LAN server Node 15341 (Addr: 127.0.0.1:15342) (DC: dc1)
2016/06/23 07:50:54 [INFO] serf: EventMemberJoin: Node 15341.dc1 127.0.0.1
2016/06/23 07:50:54 [INFO] consul: adding WAN server Node 15341.dc1 (Addr: 127.0.0.1:15342) (DC: dc1)
2016/06/23 07:50:54 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:54 [INFO] raft: Node at 127.0.0.1:15342 [Candidate] entering Candidate state
2016/06/23 07:50:55 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:55 [DEBUG] raft: Vote granted from 127.0.0.1:15342. Tally: 1
2016/06/23 07:50:55 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:55 [INFO] raft: Node at 127.0.0.1:15342 [Leader] entering Leader state
2016/06/23 07:50:55 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:55 [INFO] consul: New leader elected: Node 15341
2016/06/23 07:50:55 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:55 [DEBUG] raft: Node 127.0.0.1:15342 updated peer set (2): [127.0.0.1:15342]
2016/06/23 07:50:55 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:55 [INFO] consul: member 'Node 15341' joined, marking health alive
2016/06/23 07:50:57 [DEBUG] consul: dropping check "service:bar" from result due to ACLs
2016/06/23 07:50:57 [INFO] consul: shutting down server
2016/06/23 07:50:57 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:57 [WARN] serf: Shutdown without a Leave
2016/06/23 07:50:57 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_NodeChecks_FilterACL (3.58s)
=== RUN   TestHealth_ServiceChecks_FilterACL
2016/06/23 07:50:58 [INFO] raft: Node at 127.0.0.1:15346 [Follower] entering Follower state
2016/06/23 07:50:58 [INFO] serf: EventMemberJoin: Node 15345 127.0.0.1
2016/06/23 07:50:58 [INFO] consul: adding LAN server Node 15345 (Addr: 127.0.0.1:15346) (DC: dc1)
2016/06/23 07:50:58 [INFO] serf: EventMemberJoin: Node 15345.dc1 127.0.0.1
2016/06/23 07:50:58 [INFO] consul: adding WAN server Node 15345.dc1 (Addr: 127.0.0.1:15346) (DC: dc1)
2016/06/23 07:50:58 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:50:58 [INFO] raft: Node at 127.0.0.1:15346 [Candidate] entering Candidate state
2016/06/23 07:50:58 [DEBUG] raft: Votes needed: 1
2016/06/23 07:50:58 [DEBUG] raft: Vote granted from 127.0.0.1:15346. Tally: 1
2016/06/23 07:50:58 [INFO] raft: Election won. Tally: 1
2016/06/23 07:50:58 [INFO] raft: Node at 127.0.0.1:15346 [Leader] entering Leader state
2016/06/23 07:50:58 [INFO] consul: cluster leadership acquired
2016/06/23 07:50:58 [INFO] consul: New leader elected: Node 15345
2016/06/23 07:50:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:50:59 [DEBUG] raft: Node 127.0.0.1:15346 updated peer set (2): [127.0.0.1:15346]
2016/06/23 07:50:59 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:50:59 [INFO] consul: member 'Node 15345' joined, marking health alive
2016/06/23 07:51:00 [DEBUG] consul: dropping check "service:bar" from result due to ACLs
2016/06/23 07:51:00 [INFO] consul: shutting down server
2016/06/23 07:51:00 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:01 [WARN] serf: Shutdown without a Leave
--- PASS: TestHealth_ServiceChecks_FilterACL (3.81s)
=== RUN   TestHealth_ServiceNodes_FilterACL
2016/06/23 07:51:02 [INFO] raft: Node at 127.0.0.1:15350 [Follower] entering Follower state
2016/06/23 07:51:02 [INFO] serf: EventMemberJoin: Node 15349 127.0.0.1
2016/06/23 07:51:02 [INFO] consul: adding LAN server Node 15349 (Addr: 127.0.0.1:15350) (DC: dc1)
2016/06/23 07:51:02 [INFO] serf: EventMemberJoin: Node 15349.dc1 127.0.0.1
2016/06/23 07:51:02 [INFO] consul: adding WAN server Node 15349.dc1 (Addr: 127.0.0.1:15350) (DC: dc1)
2016/06/23 07:51:02 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:02 [INFO] raft: Node at 127.0.0.1:15350 [Candidate] entering Candidate state
2016/06/23 07:51:03 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:03 [DEBUG] raft: Vote granted from 127.0.0.1:15350. Tally: 1
2016/06/23 07:51:03 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:03 [INFO] raft: Node at 127.0.0.1:15350 [Leader] entering Leader state
2016/06/23 07:51:03 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:03 [INFO] consul: New leader elected: Node 15349
2016/06/23 07:51:03 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:03 [DEBUG] raft: Node 127.0.0.1:15350 updated peer set (2): [127.0.0.1:15350]
2016/06/23 07:51:03 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:03 [INFO] consul: member 'Node 15349' joined, marking health alive
2016/06/23 07:51:05 [DEBUG] consul: dropping node "Node 15349" from result due to ACLs
2016/06/23 07:51:05 [INFO] consul: shutting down server
2016/06/23 07:51:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:05 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestHealth_ServiceNodes_FilterACL (4.20s)
=== RUN   TestHealth_ChecksInState_FilterACL
2016/06/23 07:51:06 [INFO] raft: Node at 127.0.0.1:15354 [Follower] entering Follower state
2016/06/23 07:51:06 [INFO] serf: EventMemberJoin: Node 15353 127.0.0.1
2016/06/23 07:51:06 [INFO] consul: adding LAN server Node 15353 (Addr: 127.0.0.1:15354) (DC: dc1)
2016/06/23 07:51:06 [INFO] serf: EventMemberJoin: Node 15353.dc1 127.0.0.1
2016/06/23 07:51:06 [INFO] consul: adding WAN server Node 15353.dc1 (Addr: 127.0.0.1:15354) (DC: dc1)
2016/06/23 07:51:06 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:06 [INFO] raft: Node at 127.0.0.1:15354 [Candidate] entering Candidate state
2016/06/23 07:51:07 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:07 [DEBUG] raft: Vote granted from 127.0.0.1:15354. Tally: 1
2016/06/23 07:51:07 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:07 [INFO] raft: Node at 127.0.0.1:15354 [Leader] entering Leader state
2016/06/23 07:51:07 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:07 [INFO] consul: New leader elected: Node 15353
2016/06/23 07:51:07 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:07 [DEBUG] raft: Node 127.0.0.1:15354 updated peer set (2): [127.0.0.1:15354]
2016/06/23 07:51:07 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:08 [INFO] consul: member 'Node 15353' joined, marking health alive
2016/06/23 07:51:09 [INFO] consul: shutting down server
2016/06/23 07:51:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:10 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:51:10 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestHealth_ChecksInState_FilterACL (4.80s)
=== RUN   TestInternal_NodeInfo
2016/06/23 07:51:11 [INFO] raft: Node at 127.0.0.1:15358 [Follower] entering Follower state
2016/06/23 07:51:11 [INFO] serf: EventMemberJoin: Node 15357 127.0.0.1
2016/06/23 07:51:11 [INFO] consul: adding LAN server Node 15357 (Addr: 127.0.0.1:15358) (DC: dc1)
2016/06/23 07:51:11 [INFO] serf: EventMemberJoin: Node 15357.dc1 127.0.0.1
2016/06/23 07:51:11 [INFO] consul: adding WAN server Node 15357.dc1 (Addr: 127.0.0.1:15358) (DC: dc1)
2016/06/23 07:51:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:11 [INFO] raft: Node at 127.0.0.1:15358 [Candidate] entering Candidate state
2016/06/23 07:51:11 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:11 [DEBUG] raft: Vote granted from 127.0.0.1:15358. Tally: 1
2016/06/23 07:51:11 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:11 [INFO] raft: Node at 127.0.0.1:15358 [Leader] entering Leader state
2016/06/23 07:51:11 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:11 [INFO] consul: New leader elected: Node 15357
2016/06/23 07:51:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:12 [DEBUG] raft: Node 127.0.0.1:15358 updated peer set (2): [127.0.0.1:15358]
2016/06/23 07:51:12 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:12 [INFO] consul: member 'Node 15357' joined, marking health alive
2016/06/23 07:51:13 [INFO] consul: shutting down server
2016/06/23 07:51:13 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:13 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:13 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestInternal_NodeInfo (3.47s)
=== RUN   TestInternal_NodeDump
2016/06/23 07:51:14 [INFO] raft: Node at 127.0.0.1:15362 [Follower] entering Follower state
2016/06/23 07:51:14 [INFO] serf: EventMemberJoin: Node 15361 127.0.0.1
2016/06/23 07:51:14 [INFO] consul: adding LAN server Node 15361 (Addr: 127.0.0.1:15362) (DC: dc1)
2016/06/23 07:51:14 [INFO] serf: EventMemberJoin: Node 15361.dc1 127.0.0.1
2016/06/23 07:51:14 [INFO] consul: adding WAN server Node 15361.dc1 (Addr: 127.0.0.1:15362) (DC: dc1)
2016/06/23 07:51:14 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:14 [INFO] raft: Node at 127.0.0.1:15362 [Candidate] entering Candidate state
2016/06/23 07:51:15 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:15 [DEBUG] raft: Vote granted from 127.0.0.1:15362. Tally: 1
2016/06/23 07:51:15 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:15 [INFO] raft: Node at 127.0.0.1:15362 [Leader] entering Leader state
2016/06/23 07:51:15 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:15 [INFO] consul: New leader elected: Node 15361
2016/06/23 07:51:15 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:15 [DEBUG] raft: Node 127.0.0.1:15362 updated peer set (2): [127.0.0.1:15362]
2016/06/23 07:51:15 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:15 [INFO] consul: member 'Node 15361' joined, marking health alive
2016/06/23 07:51:16 [INFO] consul: shutting down server
2016/06/23 07:51:16 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:16 [WARN] serf: Shutdown without a Leave
--- PASS: TestInternal_NodeDump (3.35s)
=== RUN   TestInternal_KeyringOperation
2016/06/23 07:51:17 [INFO] raft: Node at 127.0.0.1:15366 [Follower] entering Follower state
2016/06/23 07:51:17 [INFO] serf: EventMemberJoin: Node 15365 127.0.0.1
2016/06/23 07:51:17 [INFO] consul: adding LAN server Node 15365 (Addr: 127.0.0.1:15366) (DC: dc1)
2016/06/23 07:51:17 [INFO] serf: EventMemberJoin: Node 15365.dc1 127.0.0.1
2016/06/23 07:51:17 [INFO] consul: adding WAN server Node 15365.dc1 (Addr: 127.0.0.1:15366) (DC: dc1)
2016/06/23 07:51:17 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:17 [INFO] raft: Node at 127.0.0.1:15366 [Candidate] entering Candidate state
2016/06/23 07:51:18 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:18 [DEBUG] raft: Vote granted from 127.0.0.1:15366. Tally: 1
2016/06/23 07:51:18 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:18 [INFO] raft: Node at 127.0.0.1:15366 [Leader] entering Leader state
2016/06/23 07:51:18 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:18 [INFO] consul: New leader elected: Node 15365
2016/06/23 07:51:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:18 [DEBUG] raft: Node 127.0.0.1:15366 updated peer set (2): [127.0.0.1:15366]
2016/06/23 07:51:18 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:18 [INFO] consul: member 'Node 15365' joined, marking health alive
2016/06/23 07:51:18 [INFO] serf: Received list-keys query
2016/06/23 07:51:18 [DEBUG] serf: messageQueryResponseType: Node 15365.dc1
2016/06/23 07:51:18 [INFO] serf: Received list-keys query
2016/06/23 07:51:18 [DEBUG] serf: messageQueryResponseType: Node 15365
2016/06/23 07:51:19 [INFO] raft: Node at 127.0.0.1:15370 [Follower] entering Follower state
2016/06/23 07:51:19 [INFO] serf: EventMemberJoin: Node 15369 127.0.0.1
2016/06/23 07:51:19 [INFO] serf: EventMemberJoin: Node 15369.dc2 127.0.0.1
2016/06/23 07:51:19 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15368
2016/06/23 07:51:19 [INFO] consul: adding LAN server Node 15369 (Addr: 127.0.0.1:15370) (DC: dc2)
2016/06/23 07:51:19 [INFO] consul: adding WAN server Node 15369.dc2 (Addr: 127.0.0.1:15370) (DC: dc2)
2016/06/23 07:51:19 [DEBUG] memberlist: TCP connection from=127.0.0.1:34096
2016/06/23 07:51:19 [INFO] serf: EventMemberJoin: Node 15369.dc2 127.0.0.1
2016/06/23 07:51:19 [INFO] consul: adding WAN server Node 15369.dc2 (Addr: 127.0.0.1:15370) (DC: dc2)
2016/06/23 07:51:19 [INFO] serf: EventMemberJoin: Node 15365.dc1 127.0.0.1
2016/06/23 07:51:19 [INFO] consul: adding WAN server Node 15365.dc1 (Addr: 127.0.0.1:15366) (DC: dc1)
2016/06/23 07:51:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [INFO] raft: Node at 127.0.0.1:15370 [Candidate] entering Candidate state
2016/06/23 07:51:19 [INFO] serf: Received list-keys query
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [INFO] serf: Received list-keys query
2016/06/23 07:51:19 [DEBUG] serf: messageQueryResponseType: Node 15365.dc1
2016/06/23 07:51:19 [DEBUG] serf: messageQueryResponseType: Node 15369.dc2
2016/06/23 07:51:19 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [INFO] serf: Received list-keys query
2016/06/23 07:51:19 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [DEBUG] serf: messageQueryResponseType: Node 15369.dc2
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [INFO] serf: Received list-keys query
2016/06/23 07:51:19 [DEBUG] serf: messageQueryResponseType: Node 15365
2016/06/23 07:51:19 [INFO] serf: Received list-keys query
2016/06/23 07:51:19 [DEBUG] serf: messageQueryResponseType: Node 15369
2016/06/23 07:51:19 [INFO] consul: shutting down server
2016/06/23 07:51:19 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:19 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [DEBUG] serf: messageJoinType: Node 15369.dc2
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [DEBUG] serf: messageQueryType: _serf_list-keys
2016/06/23 07:51:19 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:19 [DEBUG] memberlist: Failed UDP ping: Node 15369.dc2 (timeout reached)
2016/06/23 07:51:19 [INFO] memberlist: Suspect Node 15369.dc2 has failed, no acks received
2016/06/23 07:51:20 [DEBUG] memberlist: Failed UDP ping: Node 15369.dc2 (timeout reached)
2016/06/23 07:51:20 [INFO] memberlist: Suspect Node 15369.dc2 has failed, no acks received
2016/06/23 07:51:20 [INFO] memberlist: Marking Node 15369.dc2 as failed, suspect timeout reached
2016/06/23 07:51:20 [INFO] serf: EventMemberFailed: Node 15369.dc2 127.0.0.1
2016/06/23 07:51:20 [INFO] consul: removing WAN server Node 15369.dc2 (Addr: 127.0.0.1:15370) (DC: dc2)
2016/06/23 07:51:20 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:20 [INFO] consul: shutting down server
2016/06/23 07:51:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:20 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:51:20 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestInternal_KeyringOperation (3.44s)
=== RUN   TestInternal_NodeInfo_FilterACL
2016/06/23 07:51:21 [INFO] raft: Node at 127.0.0.1:15374 [Follower] entering Follower state
2016/06/23 07:51:21 [INFO] serf: EventMemberJoin: Node 15373 127.0.0.1
2016/06/23 07:51:21 [INFO] consul: adding LAN server Node 15373 (Addr: 127.0.0.1:15374) (DC: dc1)
2016/06/23 07:51:21 [INFO] serf: EventMemberJoin: Node 15373.dc1 127.0.0.1
2016/06/23 07:51:21 [INFO] consul: adding WAN server Node 15373.dc1 (Addr: 127.0.0.1:15374) (DC: dc1)
2016/06/23 07:51:21 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:21 [INFO] raft: Node at 127.0.0.1:15374 [Candidate] entering Candidate state
2016/06/23 07:51:21 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:21 [DEBUG] raft: Vote granted from 127.0.0.1:15374. Tally: 1
2016/06/23 07:51:21 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:21 [INFO] raft: Node at 127.0.0.1:15374 [Leader] entering Leader state
2016/06/23 07:51:21 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:21 [INFO] consul: New leader elected: Node 15373
2016/06/23 07:51:21 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:22 [DEBUG] raft: Node 127.0.0.1:15374 updated peer set (2): [127.0.0.1:15374]
2016/06/23 07:51:22 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:22 [INFO] consul: member 'Node 15373' joined, marking health alive
2016/06/23 07:51:23 [DEBUG] consul: dropping check "service:bar" from result due to ACLs
2016/06/23 07:51:23 [INFO] consul: shutting down server
2016/06/23 07:51:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:24 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestInternal_NodeInfo_FilterACL (3.59s)
=== RUN   TestInternal_NodeDump_FilterACL
2016/06/23 07:51:24 [INFO] raft: Node at 127.0.0.1:15378 [Follower] entering Follower state
2016/06/23 07:51:24 [INFO] serf: EventMemberJoin: Node 15377 127.0.0.1
2016/06/23 07:51:24 [INFO] consul: adding LAN server Node 15377 (Addr: 127.0.0.1:15378) (DC: dc1)
2016/06/23 07:51:24 [INFO] serf: EventMemberJoin: Node 15377.dc1 127.0.0.1
2016/06/23 07:51:24 [INFO] consul: adding WAN server Node 15377.dc1 (Addr: 127.0.0.1:15378) (DC: dc1)
2016/06/23 07:51:24 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:24 [INFO] raft: Node at 127.0.0.1:15378 [Candidate] entering Candidate state
2016/06/23 07:51:25 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:25 [DEBUG] raft: Vote granted from 127.0.0.1:15378. Tally: 1
2016/06/23 07:51:25 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:25 [INFO] raft: Node at 127.0.0.1:15378 [Leader] entering Leader state
2016/06/23 07:51:25 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:25 [INFO] consul: New leader elected: Node 15377
2016/06/23 07:51:25 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:25 [DEBUG] raft: Node 127.0.0.1:15378 updated peer set (2): [127.0.0.1:15378]
2016/06/23 07:51:25 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:26 [INFO] consul: member 'Node 15377' joined, marking health alive
2016/06/23 07:51:28 [INFO] consul: shutting down server
2016/06/23 07:51:28 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:28 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:28 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:51:28 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestInternal_NodeDump_FilterACL (4.18s)
=== RUN   TestInternal_EventFire_Token
2016/06/23 07:51:29 [INFO] raft: Node at 127.0.0.1:15382 [Follower] entering Follower state
2016/06/23 07:51:29 [INFO] serf: EventMemberJoin: Node 15381 127.0.0.1
2016/06/23 07:51:29 [INFO] consul: adding LAN server Node 15381 (Addr: 127.0.0.1:15382) (DC: dc1)
2016/06/23 07:51:29 [INFO] serf: EventMemberJoin: Node 15381.dc1 127.0.0.1
2016/06/23 07:51:29 [INFO] consul: adding WAN server Node 15381.dc1 (Addr: 127.0.0.1:15382) (DC: dc1)
2016/06/23 07:51:29 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:29 [INFO] raft: Node at 127.0.0.1:15382 [Candidate] entering Candidate state
2016/06/23 07:51:30 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:30 [DEBUG] raft: Vote granted from 127.0.0.1:15382. Tally: 1
2016/06/23 07:51:30 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:30 [INFO] raft: Node at 127.0.0.1:15382 [Leader] entering Leader state
2016/06/23 07:51:30 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:30 [INFO] consul: New leader elected: Node 15381
2016/06/23 07:51:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:30 [DEBUG] raft: Node 127.0.0.1:15382 updated peer set (2): [127.0.0.1:15382]
2016/06/23 07:51:30 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:31 [INFO] consul: member 'Node 15381' joined, marking health alive
2016/06/23 07:51:31 [WARN] consul: user event "foo" blocked by ACLs
2016/06/23 07:51:31 [DEBUG] consul: user event: foo
2016/06/23 07:51:31 [INFO] consul: shutting down server
2016/06/23 07:51:31 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:31 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:31 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestInternal_EventFire_Token (3.48s)
=== RUN   TestHealthCheckRace
--- PASS: TestHealthCheckRace (0.00s)
=== RUN   TestKVS_Apply
2016/06/23 07:51:32 [INFO] raft: Node at 127.0.0.1:15386 [Follower] entering Follower state
2016/06/23 07:51:32 [INFO] serf: EventMemberJoin: Node 15385 127.0.0.1
2016/06/23 07:51:32 [INFO] consul: adding LAN server Node 15385 (Addr: 127.0.0.1:15386) (DC: dc1)
2016/06/23 07:51:32 [INFO] serf: EventMemberJoin: Node 15385.dc1 127.0.0.1
2016/06/23 07:51:32 [INFO] consul: adding WAN server Node 15385.dc1 (Addr: 127.0.0.1:15386) (DC: dc1)
2016/06/23 07:51:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:32 [INFO] raft: Node at 127.0.0.1:15386 [Candidate] entering Candidate state
2016/06/23 07:51:33 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:33 [DEBUG] raft: Vote granted from 127.0.0.1:15386. Tally: 1
2016/06/23 07:51:33 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:33 [INFO] raft: Node at 127.0.0.1:15386 [Leader] entering Leader state
2016/06/23 07:51:33 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:33 [INFO] consul: New leader elected: Node 15385
2016/06/23 07:51:33 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:33 [DEBUG] raft: Node 127.0.0.1:15386 updated peer set (2): [127.0.0.1:15386]
2016/06/23 07:51:33 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:33 [INFO] consul: member 'Node 15385' joined, marking health alive
2016/06/23 07:51:34 [INFO] consul: shutting down server
2016/06/23 07:51:34 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:34 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:34 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVS_Apply (3.01s)
=== RUN   TestKVS_Apply_ACLDeny
2016/06/23 07:51:35 [INFO] raft: Node at 127.0.0.1:15390 [Follower] entering Follower state
2016/06/23 07:51:35 [INFO] serf: EventMemberJoin: Node 15389 127.0.0.1
2016/06/23 07:51:35 [INFO] consul: adding LAN server Node 15389 (Addr: 127.0.0.1:15390) (DC: dc1)
2016/06/23 07:51:35 [INFO] serf: EventMemberJoin: Node 15389.dc1 127.0.0.1
2016/06/23 07:51:35 [INFO] consul: adding WAN server Node 15389.dc1 (Addr: 127.0.0.1:15390) (DC: dc1)
2016/06/23 07:51:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:35 [INFO] raft: Node at 127.0.0.1:15390 [Candidate] entering Candidate state
2016/06/23 07:51:36 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:36 [DEBUG] raft: Vote granted from 127.0.0.1:15390. Tally: 1
2016/06/23 07:51:36 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:36 [INFO] raft: Node at 127.0.0.1:15390 [Leader] entering Leader state
2016/06/23 07:51:36 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:36 [INFO] consul: New leader elected: Node 15389
2016/06/23 07:51:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:36 [DEBUG] raft: Node 127.0.0.1:15390 updated peer set (2): [127.0.0.1:15390]
2016/06/23 07:51:36 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:36 [INFO] consul: member 'Node 15389' joined, marking health alive
2016/06/23 07:51:37 [INFO] consul: shutting down server
2016/06/23 07:51:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:37 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:51:37 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVS_Apply_ACLDeny (3.02s)
=== RUN   TestKVS_Get
2016/06/23 07:51:38 [INFO] raft: Node at 127.0.0.1:15394 [Follower] entering Follower state
2016/06/23 07:51:38 [INFO] serf: EventMemberJoin: Node 15393 127.0.0.1
2016/06/23 07:51:38 [INFO] serf: EventMemberJoin: Node 15393.dc1 127.0.0.1
2016/06/23 07:51:38 [INFO] consul: adding WAN server Node 15393.dc1 (Addr: 127.0.0.1:15394) (DC: dc1)
2016/06/23 07:51:38 [INFO] consul: adding LAN server Node 15393 (Addr: 127.0.0.1:15394) (DC: dc1)
2016/06/23 07:51:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:38 [INFO] raft: Node at 127.0.0.1:15394 [Candidate] entering Candidate state
2016/06/23 07:51:39 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:39 [DEBUG] raft: Vote granted from 127.0.0.1:15394. Tally: 1
2016/06/23 07:51:39 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:39 [INFO] raft: Node at 127.0.0.1:15394 [Leader] entering Leader state
2016/06/23 07:51:39 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:39 [INFO] consul: New leader elected: Node 15393
2016/06/23 07:51:39 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:39 [DEBUG] raft: Node 127.0.0.1:15394 updated peer set (2): [127.0.0.1:15394]
2016/06/23 07:51:39 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:39 [INFO] consul: member 'Node 15393' joined, marking health alive
2016/06/23 07:51:40 [INFO] consul: shutting down server
2016/06/23 07:51:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:40 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVS_Get (2.78s)
=== RUN   TestKVS_Get_ACLDeny
2016/06/23 07:51:41 [INFO] raft: Node at 127.0.0.1:15398 [Follower] entering Follower state
2016/06/23 07:51:41 [INFO] serf: EventMemberJoin: Node 15397 127.0.0.1
2016/06/23 07:51:41 [INFO] serf: EventMemberJoin: Node 15397.dc1 127.0.0.1
2016/06/23 07:51:41 [INFO] consul: adding LAN server Node 15397 (Addr: 127.0.0.1:15398) (DC: dc1)
2016/06/23 07:51:41 [INFO] consul: adding WAN server Node 15397.dc1 (Addr: 127.0.0.1:15398) (DC: dc1)
2016/06/23 07:51:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:41 [INFO] raft: Node at 127.0.0.1:15398 [Candidate] entering Candidate state
2016/06/23 07:51:42 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:42 [DEBUG] raft: Vote granted from 127.0.0.1:15398. Tally: 1
2016/06/23 07:51:42 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:42 [INFO] raft: Node at 127.0.0.1:15398 [Leader] entering Leader state
2016/06/23 07:51:42 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:42 [INFO] consul: New leader elected: Node 15397
2016/06/23 07:51:43 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:43 [DEBUG] raft: Node 127.0.0.1:15398 updated peer set (2): [127.0.0.1:15398]
2016/06/23 07:51:43 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:44 [INFO] consul: member 'Node 15397' joined, marking health alive
2016/06/23 07:51:45 [INFO] consul: shutting down server
2016/06/23 07:51:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:45 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:51:45 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVS_Get_ACLDeny (4.81s)
=== RUN   TestKVSEndpoint_List
2016/06/23 07:51:46 [INFO] raft: Node at 127.0.0.1:15402 [Follower] entering Follower state
2016/06/23 07:51:46 [INFO] serf: EventMemberJoin: Node 15401 127.0.0.1
2016/06/23 07:51:46 [INFO] consul: adding LAN server Node 15401 (Addr: 127.0.0.1:15402) (DC: dc1)
2016/06/23 07:51:46 [INFO] serf: EventMemberJoin: Node 15401.dc1 127.0.0.1
2016/06/23 07:51:46 [INFO] consul: adding WAN server Node 15401.dc1 (Addr: 127.0.0.1:15402) (DC: dc1)
2016/06/23 07:51:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:46 [INFO] raft: Node at 127.0.0.1:15402 [Candidate] entering Candidate state
2016/06/23 07:51:46 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:46 [DEBUG] raft: Vote granted from 127.0.0.1:15402. Tally: 1
2016/06/23 07:51:46 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:46 [INFO] raft: Node at 127.0.0.1:15402 [Leader] entering Leader state
2016/06/23 07:51:46 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:46 [INFO] consul: New leader elected: Node 15401
2016/06/23 07:51:47 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:47 [DEBUG] raft: Node 127.0.0.1:15402 updated peer set (2): [127.0.0.1:15402]
2016/06/23 07:51:47 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:47 [INFO] consul: member 'Node 15401' joined, marking health alive
2016/06/23 07:51:48 [INFO] consul: shutting down server
2016/06/23 07:51:48 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:48 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:48 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:51:48 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVSEndpoint_List (3.55s)
=== RUN   TestKVSEndpoint_List_Blocking
2016/06/23 07:51:49 [INFO] raft: Node at 127.0.0.1:15406 [Follower] entering Follower state
2016/06/23 07:51:49 [INFO] serf: EventMemberJoin: Node 15405 127.0.0.1
2016/06/23 07:51:49 [INFO] consul: adding LAN server Node 15405 (Addr: 127.0.0.1:15406) (DC: dc1)
2016/06/23 07:51:49 [INFO] serf: EventMemberJoin: Node 15405.dc1 127.0.0.1
2016/06/23 07:51:49 [INFO] consul: adding WAN server Node 15405.dc1 (Addr: 127.0.0.1:15406) (DC: dc1)
2016/06/23 07:51:49 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:49 [INFO] raft: Node at 127.0.0.1:15406 [Candidate] entering Candidate state
2016/06/23 07:51:50 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:50 [DEBUG] raft: Vote granted from 127.0.0.1:15406. Tally: 1
2016/06/23 07:51:50 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:50 [INFO] raft: Node at 127.0.0.1:15406 [Leader] entering Leader state
2016/06/23 07:51:50 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:50 [INFO] consul: New leader elected: Node 15405
2016/06/23 07:51:50 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:51 [DEBUG] raft: Node 127.0.0.1:15406 updated peer set (2): [127.0.0.1:15406]
2016/06/23 07:51:51 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:51 [INFO] consul: member 'Node 15405' joined, marking health alive
2016/06/23 07:51:53 [INFO] consul: shutting down server
2016/06/23 07:51:53 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:53 [WARN] serf: Shutdown without a Leave
2016/06/23 07:51:53 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:51:53 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVSEndpoint_List_Blocking (4.99s)
=== RUN   TestKVSEndpoint_List_ACLDeny
2016/06/23 07:51:54 [INFO] raft: Node at 127.0.0.1:15410 [Follower] entering Follower state
2016/06/23 07:51:54 [INFO] serf: EventMemberJoin: Node 15409 127.0.0.1
2016/06/23 07:51:54 [INFO] consul: adding LAN server Node 15409 (Addr: 127.0.0.1:15410) (DC: dc1)
2016/06/23 07:51:54 [INFO] serf: EventMemberJoin: Node 15409.dc1 127.0.0.1
2016/06/23 07:51:54 [INFO] consul: adding WAN server Node 15409.dc1 (Addr: 127.0.0.1:15410) (DC: dc1)
2016/06/23 07:51:54 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:51:54 [INFO] raft: Node at 127.0.0.1:15410 [Candidate] entering Candidate state
2016/06/23 07:51:55 [DEBUG] raft: Votes needed: 1
2016/06/23 07:51:55 [DEBUG] raft: Vote granted from 127.0.0.1:15410. Tally: 1
2016/06/23 07:51:55 [INFO] raft: Election won. Tally: 1
2016/06/23 07:51:55 [INFO] raft: Node at 127.0.0.1:15410 [Leader] entering Leader state
2016/06/23 07:51:55 [INFO] consul: cluster leadership acquired
2016/06/23 07:51:55 [INFO] consul: New leader elected: Node 15409
2016/06/23 07:51:55 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:51:55 [DEBUG] raft: Node 127.0.0.1:15410 updated peer set (2): [127.0.0.1:15410]
2016/06/23 07:51:55 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:51:56 [INFO] consul: member 'Node 15409' joined, marking health alive
2016/06/23 07:52:00 [INFO] consul: shutting down server
2016/06/23 07:52:00 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:00 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:01 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestKVSEndpoint_List_ACLDeny (7.08s)
=== RUN   TestKVSEndpoint_ListKeys
2016/06/23 07:52:01 [INFO] raft: Node at 127.0.0.1:15414 [Follower] entering Follower state
2016/06/23 07:52:01 [INFO] serf: EventMemberJoin: Node 15413 127.0.0.1
2016/06/23 07:52:01 [INFO] consul: adding LAN server Node 15413 (Addr: 127.0.0.1:15414) (DC: dc1)
2016/06/23 07:52:01 [INFO] serf: EventMemberJoin: Node 15413.dc1 127.0.0.1
2016/06/23 07:52:01 [INFO] consul: adding WAN server Node 15413.dc1 (Addr: 127.0.0.1:15414) (DC: dc1)
2016/06/23 07:52:01 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:01 [INFO] raft: Node at 127.0.0.1:15414 [Candidate] entering Candidate state
2016/06/23 07:52:02 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:02 [DEBUG] raft: Vote granted from 127.0.0.1:15414. Tally: 1
2016/06/23 07:52:02 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:02 [INFO] raft: Node at 127.0.0.1:15414 [Leader] entering Leader state
2016/06/23 07:52:02 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:02 [INFO] consul: New leader elected: Node 15413
2016/06/23 07:52:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:02 [DEBUG] raft: Node 127.0.0.1:15414 updated peer set (2): [127.0.0.1:15414]
2016/06/23 07:52:02 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:02 [INFO] consul: member 'Node 15413' joined, marking health alive
2016/06/23 07:52:04 [INFO] consul: shutting down server
2016/06/23 07:52:04 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:05 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:52:05 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVSEndpoint_ListKeys (4.18s)
=== RUN   TestKVSEndpoint_ListKeys_ACLDeny
2016/06/23 07:52:05 [INFO] raft: Node at 127.0.0.1:15418 [Follower] entering Follower state
2016/06/23 07:52:05 [INFO] serf: EventMemberJoin: Node 15417 127.0.0.1
2016/06/23 07:52:05 [INFO] consul: adding LAN server Node 15417 (Addr: 127.0.0.1:15418) (DC: dc1)
2016/06/23 07:52:05 [INFO] serf: EventMemberJoin: Node 15417.dc1 127.0.0.1
2016/06/23 07:52:05 [INFO] consul: adding WAN server Node 15417.dc1 (Addr: 127.0.0.1:15418) (DC: dc1)
2016/06/23 07:52:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:05 [INFO] raft: Node at 127.0.0.1:15418 [Candidate] entering Candidate state
2016/06/23 07:52:06 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:06 [DEBUG] raft: Vote granted from 127.0.0.1:15418. Tally: 1
2016/06/23 07:52:06 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:06 [INFO] raft: Node at 127.0.0.1:15418 [Leader] entering Leader state
2016/06/23 07:52:06 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:06 [INFO] consul: New leader elected: Node 15417
2016/06/23 07:52:06 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:07 [DEBUG] raft: Node 127.0.0.1:15418 updated peer set (2): [127.0.0.1:15418]
2016/06/23 07:52:07 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:07 [INFO] consul: member 'Node 15417' joined, marking health alive
2016/06/23 07:52:11 [INFO] consul: shutting down server
2016/06/23 07:52:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:11 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:11 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestKVSEndpoint_ListKeys_ACLDeny (6.31s)
=== RUN   TestKVS_Apply_LockDelay
2016/06/23 07:52:12 [INFO] raft: Node at 127.0.0.1:15422 [Follower] entering Follower state
2016/06/23 07:52:12 [INFO] serf: EventMemberJoin: Node 15421 127.0.0.1
2016/06/23 07:52:12 [INFO] consul: adding LAN server Node 15421 (Addr: 127.0.0.1:15422) (DC: dc1)
2016/06/23 07:52:12 [INFO] serf: EventMemberJoin: Node 15421.dc1 127.0.0.1
2016/06/23 07:52:12 [INFO] consul: adding WAN server Node 15421.dc1 (Addr: 127.0.0.1:15422) (DC: dc1)
2016/06/23 07:52:12 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:12 [INFO] raft: Node at 127.0.0.1:15422 [Candidate] entering Candidate state
2016/06/23 07:52:12 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:12 [DEBUG] raft: Vote granted from 127.0.0.1:15422. Tally: 1
2016/06/23 07:52:12 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:12 [INFO] raft: Node at 127.0.0.1:15422 [Leader] entering Leader state
2016/06/23 07:52:12 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:12 [INFO] consul: New leader elected: Node 15421
2016/06/23 07:52:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:13 [DEBUG] raft: Node 127.0.0.1:15422 updated peer set (2): [127.0.0.1:15422]
2016/06/23 07:52:13 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:13 [INFO] consul: member 'Node 15421' joined, marking health alive
2016/06/23 07:52:13 [WARN] consul.kvs: Rejecting lock of test due to lock-delay until 2016-06-23 07:52:13.337636374 +0000 UTC
2016/06/23 07:52:14 [INFO] consul: shutting down server
2016/06/23 07:52:14 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:14 [WARN] serf: Shutdown without a Leave
--- PASS: TestKVS_Apply_LockDelay (2.80s)
=== RUN   TestKVS_Issue_1626
2016/06/23 07:52:15 [INFO] raft: Node at 127.0.0.1:15426 [Follower] entering Follower state
2016/06/23 07:52:15 [INFO] serf: EventMemberJoin: Node 15425 127.0.0.1
2016/06/23 07:52:15 [INFO] consul: adding LAN server Node 15425 (Addr: 127.0.0.1:15426) (DC: dc1)
2016/06/23 07:52:15 [INFO] serf: EventMemberJoin: Node 15425.dc1 127.0.0.1
2016/06/23 07:52:15 [INFO] consul: adding WAN server Node 15425.dc1 (Addr: 127.0.0.1:15426) (DC: dc1)
2016/06/23 07:52:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:15 [INFO] raft: Node at 127.0.0.1:15426 [Candidate] entering Candidate state
2016/06/23 07:52:15 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:15 [DEBUG] raft: Vote granted from 127.0.0.1:15426. Tally: 1
2016/06/23 07:52:15 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:15 [INFO] raft: Node at 127.0.0.1:15426 [Leader] entering Leader state
2016/06/23 07:52:15 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:15 [INFO] consul: New leader elected: Node 15425
2016/06/23 07:52:15 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:15 [DEBUG] raft: Node 127.0.0.1:15426 updated peer set (2): [127.0.0.1:15426]
2016/06/23 07:52:15 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:15 [INFO] consul: member 'Node 15425' joined, marking health alive
2016/06/23 07:52:18 [INFO] consul: shutting down server
2016/06/23 07:52:18 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:18 [WARN] serf: Shutdown without a Leave
--- PASS: TestKVS_Issue_1626 (4.50s)
=== RUN   TestLeader_RegisterMember
2016/06/23 07:52:19 [INFO] raft: Node at 127.0.0.1:15430 [Follower] entering Follower state
2016/06/23 07:52:19 [INFO] serf: EventMemberJoin: Node 15429 127.0.0.1
2016/06/23 07:52:19 [INFO] consul: adding LAN server Node 15429 (Addr: 127.0.0.1:15430) (DC: dc1)
2016/06/23 07:52:19 [INFO] serf: EventMemberJoin: Node 15429.dc1 127.0.0.1
2016/06/23 07:52:19 [INFO] consul: adding WAN server Node 15429.dc1 (Addr: 127.0.0.1:15430) (DC: dc1)
2016/06/23 07:52:19 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:52:19 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15431
2016/06/23 07:52:19 [DEBUG] memberlist: TCP connection from=127.0.0.1:57232
2016/06/23 07:52:19 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:52:19 [INFO] serf: EventMemberJoin: Node 15429 127.0.0.1
2016/06/23 07:52:19 [INFO] consul: adding server Node 15429 (Addr: 127.0.0.1:15430) (DC: dc1)
2016/06/23 07:52:19 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:19 [INFO] raft: Node at 127.0.0.1:15430 [Candidate] entering Candidate state
2016/06/23 07:52:19 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:19 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:19 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:19 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:19 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:19 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:19 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:19 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:19 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:19 [DEBUG] raft: Vote granted from 127.0.0.1:15430. Tally: 1
2016/06/23 07:52:19 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:19 [INFO] raft: Node at 127.0.0.1:15430 [Leader] entering Leader state
2016/06/23 07:52:19 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:19 [INFO] consul: New leader elected: Node 15429
2016/06/23 07:52:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:20 [INFO] consul: New leader elected: Node 15429
2016/06/23 07:52:20 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:20 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:20 [DEBUG] raft: Node 127.0.0.1:15430 updated peer set (2): [127.0.0.1:15430]
2016/06/23 07:52:20 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:20 [INFO] consul: member 'Node 15429' joined, marking health alive
2016/06/23 07:52:20 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/06/23 07:52:20 [INFO] consul: shutting down client
2016/06/23 07:52:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:20 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/06/23 07:52:20 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/06/23 07:52:20 [INFO] consul: shutting down server
2016/06/23 07:52:20 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:21 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:21 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/06/23 07:52:21 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/06/23 07:52:21 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:52:21 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestLeader_RegisterMember (2.29s)
=== RUN   TestLeader_FailedMember
2016/06/23 07:52:21 [INFO] raft: Node at 127.0.0.1:15436 [Follower] entering Follower state
2016/06/23 07:52:21 [INFO] serf: EventMemberJoin: Node 15435 127.0.0.1
2016/06/23 07:52:21 [INFO] consul: adding LAN server Node 15435 (Addr: 127.0.0.1:15436) (DC: dc1)
2016/06/23 07:52:21 [INFO] serf: EventMemberJoin: Node 15435.dc1 127.0.0.1
2016/06/23 07:52:21 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:52:21 [INFO] consul: adding WAN server Node 15435.dc1 (Addr: 127.0.0.1:15436) (DC: dc1)
2016/06/23 07:52:21 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:21 [INFO] raft: Node at 127.0.0.1:15436 [Candidate] entering Candidate state
2016/06/23 07:52:22 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:22 [DEBUG] raft: Vote granted from 127.0.0.1:15436. Tally: 1
2016/06/23 07:52:22 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:22 [INFO] raft: Node at 127.0.0.1:15436 [Leader] entering Leader state
2016/06/23 07:52:22 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:22 [INFO] consul: New leader elected: Node 15435
2016/06/23 07:52:22 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:22 [DEBUG] raft: Node 127.0.0.1:15436 updated peer set (2): [127.0.0.1:15436]
2016/06/23 07:52:22 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:22 [INFO] consul: member 'Node 15435' joined, marking health alive
2016/06/23 07:52:22 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15437
2016/06/23 07:52:22 [DEBUG] memberlist: TCP connection from=127.0.0.1:50348
2016/06/23 07:52:22 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:52:22 [INFO] serf: EventMemberJoin: Node 15435 127.0.0.1
2016/06/23 07:52:22 [INFO] consul: shutting down client
2016/06/23 07:52:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:23 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/06/23 07:52:23 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/06/23 07:52:23 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/06/23 07:52:23 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/06/23 07:52:23 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/06/23 07:52:23 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/06/23 07:52:23 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/06/23 07:52:23 [INFO] consul: member 'testco.internal' failed, marking health critical
2016/06/23 07:52:23 [INFO] consul: shutting down client
2016/06/23 07:52:23 [INFO] consul: shutting down server
2016/06/23 07:52:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:24 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:24 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestLeader_FailedMember (3.13s)
=== RUN   TestLeader_LeftMember
2016/06/23 07:52:25 [INFO] raft: Node at 127.0.0.1:15442 [Follower] entering Follower state
2016/06/23 07:52:25 [INFO] serf: EventMemberJoin: Node 15441 127.0.0.1
2016/06/23 07:52:25 [INFO] consul: adding LAN server Node 15441 (Addr: 127.0.0.1:15442) (DC: dc1)
2016/06/23 07:52:25 [INFO] serf: EventMemberJoin: Node 15441.dc1 127.0.0.1
2016/06/23 07:52:25 [INFO] consul: adding WAN server Node 15441.dc1 (Addr: 127.0.0.1:15442) (DC: dc1)
2016/06/23 07:52:25 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:52:25 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15443
2016/06/23 07:52:25 [DEBUG] memberlist: TCP connection from=127.0.0.1:37414
2016/06/23 07:52:25 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:52:25 [INFO] serf: EventMemberJoin: Node 15441 127.0.0.1
2016/06/23 07:52:25 [INFO] consul: adding server Node 15441 (Addr: 127.0.0.1:15442) (DC: dc1)
2016/06/23 07:52:25 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:25 [INFO] raft: Node at 127.0.0.1:15442 [Candidate] entering Candidate state
2016/06/23 07:52:25 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:25 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:25 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:25 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:25 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:25 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:25 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:25 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:26 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:26 [DEBUG] raft: Vote granted from 127.0.0.1:15442. Tally: 1
2016/06/23 07:52:26 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:26 [INFO] raft: Node at 127.0.0.1:15442 [Leader] entering Leader state
2016/06/23 07:52:26 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:26 [INFO] consul: New leader elected: Node 15441
2016/06/23 07:52:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:26 [INFO] consul: New leader elected: Node 15441
2016/06/23 07:52:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:26 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:26 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:26 [DEBUG] raft: Node 127.0.0.1:15442 updated peer set (2): [127.0.0.1:15442]
2016/06/23 07:52:26 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:26 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/06/23 07:52:27 [INFO] consul: member 'Node 15441' joined, marking health alive
2016/06/23 07:52:27 [INFO] consul: client starting leave
2016/06/23 07:52:27 [DEBUG] serf: messageLeaveType: testco.internal
2016/06/23 07:52:27 [DEBUG] serf: messageLeaveType: testco.internal
2016/06/23 07:52:27 [DEBUG] serf: messageLeaveType: testco.internal
2016/06/23 07:52:27 [DEBUG] serf: messageLeaveType: testco.internal
2016/06/23 07:52:27 [INFO] serf: EventMemberLeave: testco.internal 127.0.0.1
2016/06/23 07:52:27 [DEBUG] serf: messageLeaveType: testco.internal
2016/06/23 07:52:27 [DEBUG] serf: messageLeaveType: testco.internal
2016/06/23 07:52:27 [DEBUG] serf: messageLeaveType: testco.internal
2016/06/23 07:52:27 [INFO] serf: EventMemberLeave: testco.internal 127.0.0.1
2016/06/23 07:52:27 [INFO] consul: member 'testco.internal' left, deregistering
2016/06/23 07:52:27 [INFO] consul: shutting down client
2016/06/23 07:52:27 [INFO] consul: shutting down client
2016/06/23 07:52:27 [INFO] consul: shutting down server
2016/06/23 07:52:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:27 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:28 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestLeader_LeftMember (3.75s)
=== RUN   TestLeader_ReapMember
2016/06/23 07:52:28 [INFO] raft: Node at 127.0.0.1:15448 [Follower] entering Follower state
2016/06/23 07:52:28 [INFO] serf: EventMemberJoin: Node 15447 127.0.0.1
2016/06/23 07:52:28 [INFO] consul: adding LAN server Node 15447 (Addr: 127.0.0.1:15448) (DC: dc1)
2016/06/23 07:52:28 [INFO] serf: EventMemberJoin: Node 15447.dc1 127.0.0.1
2016/06/23 07:52:28 [INFO] consul: adding WAN server Node 15447.dc1 (Addr: 127.0.0.1:15448) (DC: dc1)
2016/06/23 07:52:28 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:52:28 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15449
2016/06/23 07:52:28 [DEBUG] memberlist: TCP connection from=127.0.0.1:58092
2016/06/23 07:52:28 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:52:28 [INFO] serf: EventMemberJoin: Node 15447 127.0.0.1
2016/06/23 07:52:28 [INFO] consul: adding server Node 15447 (Addr: 127.0.0.1:15448) (DC: dc1)
2016/06/23 07:52:28 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:28 [INFO] raft: Node at 127.0.0.1:15448 [Candidate] entering Candidate state
2016/06/23 07:52:28 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:28 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:29 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:29 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:29 [DEBUG] memberlist: Potential blocking operation. Last command took 19.241589ms
2016/06/23 07:52:29 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:29 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:29 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:29 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:29 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:29 [DEBUG] raft: Vote granted from 127.0.0.1:15448. Tally: 1
2016/06/23 07:52:29 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:29 [INFO] raft: Node at 127.0.0.1:15448 [Leader] entering Leader state
2016/06/23 07:52:29 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:29 [INFO] consul: New leader elected: Node 15447
2016/06/23 07:52:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:29 [INFO] consul: New leader elected: Node 15447
2016/06/23 07:52:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:29 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:29 [DEBUG] raft: Node 127.0.0.1:15448 updated peer set (2): [127.0.0.1:15448]
2016/06/23 07:52:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:29 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:29 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:29 [INFO] consul: member 'Node 15447' joined, marking health alive
2016/06/23 07:52:29 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/06/23 07:52:30 [INFO] consul: member 'testco.internal' reaped, deregistering
2016/06/23 07:52:30 [INFO] consul: shutting down client
2016/06/23 07:52:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:30 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/06/23 07:52:30 [INFO] consul: shutting down server
2016/06/23 07:52:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:30 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/06/23 07:52:30 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:31 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestLeader_ReapMember (3.02s)
=== RUN   TestLeader_Reconcile_ReapMember
2016/06/23 07:52:31 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/06/23 07:52:31 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/06/23 07:52:32 [INFO] raft: Node at 127.0.0.1:15454 [Follower] entering Follower state
2016/06/23 07:52:32 [INFO] serf: EventMemberJoin: Node 15453 127.0.0.1
2016/06/23 07:52:32 [INFO] consul: adding LAN server Node 15453 (Addr: 127.0.0.1:15454) (DC: dc1)
2016/06/23 07:52:32 [INFO] serf: EventMemberJoin: Node 15453.dc1 127.0.0.1
2016/06/23 07:52:32 [INFO] consul: adding WAN server Node 15453.dc1 (Addr: 127.0.0.1:15454) (DC: dc1)
2016/06/23 07:52:32 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:32 [INFO] raft: Node at 127.0.0.1:15454 [Candidate] entering Candidate state
2016/06/23 07:52:32 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:32 [DEBUG] raft: Vote granted from 127.0.0.1:15454. Tally: 1
2016/06/23 07:52:32 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:32 [INFO] raft: Node at 127.0.0.1:15454 [Leader] entering Leader state
2016/06/23 07:52:32 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:32 [INFO] consul: New leader elected: Node 15453
2016/06/23 07:52:32 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:32 [DEBUG] raft: Node 127.0.0.1:15454 updated peer set (2): [127.0.0.1:15454]
2016/06/23 07:52:32 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:32 [INFO] consul: member 'Node 15453' joined, marking health alive
2016/06/23 07:52:33 [INFO] consul: member 'no-longer-around' reaped, deregistering
2016/06/23 07:52:33 [INFO] consul: member 'no-longer-around' reaped, deregistering
2016/06/23 07:52:34 [INFO] consul: shutting down server
2016/06/23 07:52:34 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:34 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:34 [ERR] consul.catalog: Deregister failed: leadership lost while committing log
2016/06/23 07:52:34 [ERR] consul: failed to reconcile: leadership lost while committing log
--- PASS: TestLeader_Reconcile_ReapMember (3.26s)
=== RUN   TestLeader_Reconcile
2016/06/23 07:52:35 [INFO] raft: Node at 127.0.0.1:15458 [Follower] entering Follower state
2016/06/23 07:52:35 [INFO] serf: EventMemberJoin: Node 15457 127.0.0.1
2016/06/23 07:52:35 [INFO] consul: adding LAN server Node 15457 (Addr: 127.0.0.1:15458) (DC: dc1)
2016/06/23 07:52:35 [INFO] serf: EventMemberJoin: Node 15457.dc1 127.0.0.1
2016/06/23 07:52:35 [INFO] consul: adding WAN server Node 15457.dc1 (Addr: 127.0.0.1:15458) (DC: dc1)
2016/06/23 07:52:35 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:52:35 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15459
2016/06/23 07:52:35 [DEBUG] memberlist: TCP connection from=127.0.0.1:38068
2016/06/23 07:52:35 [INFO] serf: EventMemberJoin: Node 15457 127.0.0.1
2016/06/23 07:52:35 [INFO] consul: adding server Node 15457 (Addr: 127.0.0.1:15458) (DC: dc1)
2016/06/23 07:52:35 [INFO] serf: EventMemberJoin: testco.internal 127.0.0.1
2016/06/23 07:52:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:35 [INFO] raft: Node at 127.0.0.1:15458 [Candidate] entering Candidate state
2016/06/23 07:52:35 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:35 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:35 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:35 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:35 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:35 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:35 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:35 [DEBUG] serf: messageJoinType: testco.internal
2016/06/23 07:52:36 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:36 [DEBUG] raft: Vote granted from 127.0.0.1:15458. Tally: 1
2016/06/23 07:52:36 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:36 [INFO] raft: Node at 127.0.0.1:15458 [Leader] entering Leader state
2016/06/23 07:52:36 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:36 [INFO] consul: New leader elected: Node 15457
2016/06/23 07:52:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:36 [INFO] consul: New leader elected: Node 15457
2016/06/23 07:52:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:36 [DEBUG] raft: Node 127.0.0.1:15458 updated peer set (2): [127.0.0.1:15458]
2016/06/23 07:52:36 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:36 [INFO] consul: member 'Node 15457' joined, marking health alive
2016/06/23 07:52:36 [INFO] consul: member 'testco.internal' joined, marking health alive
2016/06/23 07:52:36 [INFO] consul: shutting down client
2016/06/23 07:52:36 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:37 [INFO] consul: shutting down server
2016/06/23 07:52:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:37 [DEBUG] memberlist: Failed UDP ping: testco.internal (timeout reached)
2016/06/23 07:52:37 [INFO] memberlist: Suspect testco.internal has failed, no acks received
2016/06/23 07:52:37 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:37 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestLeader_Reconcile (3.07s)
=== RUN   TestLeader_LeftServer
2016/06/23 07:52:37 [INFO] memberlist: Marking testco.internal as failed, suspect timeout reached
2016/06/23 07:52:37 [INFO] serf: EventMemberFailed: testco.internal 127.0.0.1
2016/06/23 07:52:38 [INFO] raft: Node at 127.0.0.1:15464 [Follower] entering Follower state
2016/06/23 07:52:38 [INFO] serf: EventMemberJoin: Node 15463 127.0.0.1
2016/06/23 07:52:38 [INFO] consul: adding LAN server Node 15463 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/06/23 07:52:38 [INFO] serf: EventMemberJoin: Node 15463.dc1 127.0.0.1
2016/06/23 07:52:38 [INFO] consul: adding WAN server Node 15463.dc1 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/06/23 07:52:38 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:38 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:39 [INFO] serf: EventMemberJoin: Node 15467 127.0.0.1
2016/06/23 07:52:39 [INFO] raft: Node at 127.0.0.1:15468 [Follower] entering Follower state
2016/06/23 07:52:39 [INFO] consul: adding LAN server Node 15467 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/06/23 07:52:39 [INFO] serf: EventMemberJoin: Node 15467.dc1 127.0.0.1
2016/06/23 07:52:39 [INFO] consul: adding WAN server Node 15467.dc1 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/06/23 07:52:39 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:52:39 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:39 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:39 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:39 [INFO] raft: Node at 127.0.0.1:15464 [Leader] entering Leader state
2016/06/23 07:52:39 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:39 [INFO] consul: New leader elected: Node 15463
2016/06/23 07:52:39 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:39 [DEBUG] raft: Node 127.0.0.1:15464 updated peer set (2): [127.0.0.1:15464]
2016/06/23 07:52:39 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:39 [INFO] consul: member 'Node 15463' joined, marking health alive
2016/06/23 07:52:39 [INFO] raft: Node at 127.0.0.1:15472 [Follower] entering Follower state
2016/06/23 07:52:39 [INFO] serf: EventMemberJoin: Node 15471 127.0.0.1
2016/06/23 07:52:39 [INFO] consul: adding LAN server Node 15471 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/06/23 07:52:39 [INFO] serf: EventMemberJoin: Node 15471.dc1 127.0.0.1
2016/06/23 07:52:39 [INFO] consul: adding WAN server Node 15471.dc1 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/06/23 07:52:39 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15465
2016/06/23 07:52:39 [DEBUG] memberlist: TCP connection from=127.0.0.1:53162
2016/06/23 07:52:39 [INFO] serf: EventMemberJoin: Node 15467 127.0.0.1
2016/06/23 07:52:39 [INFO] serf: EventMemberJoin: Node 15463 127.0.0.1
2016/06/23 07:52:39 [INFO] consul: adding LAN server Node 15467 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/06/23 07:52:39 [INFO] consul: adding LAN server Node 15463 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/06/23 07:52:39 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15465
2016/06/23 07:52:39 [DEBUG] memberlist: TCP connection from=127.0.0.1:53164
2016/06/23 07:52:39 [INFO] serf: EventMemberJoin: Node 15471 127.0.0.1
2016/06/23 07:52:39 [INFO] consul: adding LAN server Node 15471 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/06/23 07:52:39 [INFO] serf: EventMemberJoin: Node 15467 127.0.0.1
2016/06/23 07:52:39 [INFO] consul: adding LAN server Node 15467 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/06/23 07:52:39 [INFO] serf: EventMemberJoin: Node 15463 127.0.0.1
2016/06/23 07:52:39 [INFO] consul: adding LAN server Node 15463 (Addr: 127.0.0.1:15464) (DC: dc1)
2016/06/23 07:52:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:39 [INFO] serf: EventMemberJoin: Node 15471 127.0.0.1
2016/06/23 07:52:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:39 [INFO] consul: adding LAN server Node 15471 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15471
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15467
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15471
2016/06/23 07:52:39 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15467
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15467
2016/06/23 07:52:39 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15467
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15471
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15467
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15471
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15467
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15471
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15467
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15471
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15467
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15467
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15471
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15471
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15467
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15467
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15471
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15471
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15471
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15471
2016/06/23 07:52:39 [DEBUG] serf: messageJoinType: Node 15467
2016/06/23 07:52:39 [DEBUG] raft: Node 127.0.0.1:15464 updated peer set (2): [127.0.0.1:15468 127.0.0.1:15464]
2016/06/23 07:52:39 [INFO] raft: Added peer 127.0.0.1:15468, starting replication
2016/06/23 07:52:39 [DEBUG] raft-net: 127.0.0.1:15468 accepted connection from: 127.0.0.1:55046
2016/06/23 07:52:39 [DEBUG] raft-net: 127.0.0.1:15468 accepted connection from: 127.0.0.1:55048
2016/06/23 07:52:39 [DEBUG] memberlist: Potential blocking operation. Last command took 10.203979ms
2016/06/23 07:52:40 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/06/23 07:52:40 [DEBUG] raft: Failed to contact 127.0.0.1:15468 in 209.624083ms
2016/06/23 07:52:40 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:52:40 [INFO] raft: Node at 127.0.0.1:15464 [Follower] entering Follower state
2016/06/23 07:52:40 [INFO] consul: cluster leadership lost
2016/06/23 07:52:40 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:52:40 [ERR] consul: failed to reconcile member: {Node 15467 127.0.0.1 15469 map[vsn_min:1 vsn_max:3 build: port:15468 role:consul dc:dc1 vsn:2] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:52:40 [ERR] consul: failed to add raft peer: node is not the leader
2016/06/23 07:52:40 [WARN] raft: AppendEntries to 127.0.0.1:15468 rejected, sending older logs (next: 1)
2016/06/23 07:52:40 [ERR] consul: failed to reconcile member: {Node 15471 127.0.0.1 15473 map[vsn_max:3 build: port:15472 role:consul dc:dc1 vsn:2 vsn_min:1] alive 1 3 2 2 4 4}: node is not the leader
2016/06/23 07:52:40 [ERR] consul: failed to wait for barrier: node is not the leader
2016/06/23 07:52:40 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:40 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:40 [DEBUG] raft-net: 127.0.0.1:15468 accepted connection from: 127.0.0.1:55050
2016/06/23 07:52:40 [DEBUG] raft: Node 127.0.0.1:15468 updated peer set (2): [127.0.0.1:15464]
2016/06/23 07:52:40 [INFO] raft: pipelining replication to peer 127.0.0.1:15468
2016/06/23 07:52:40 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15468
2016/06/23 07:52:40 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:40 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:40 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:40 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:40 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:40 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:41 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:41 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:41 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:41 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:41 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:41 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:42 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:42 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:42 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:42 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:42 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:42 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:42 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:42 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:43 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:43 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:43 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:43 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:43 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:43 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:43 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:43 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:43 [DEBUG] memberlist: Potential blocking operation. Last command took 10.673327ms
2016/06/23 07:52:44 [DEBUG] memberlist: Potential blocking operation. Last command took 14.059098ms
2016/06/23 07:52:44 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:44 [INFO] raft: Node at 127.0.0.1:15468 [Candidate] entering Candidate state
2016/06/23 07:52:44 [DEBUG] raft-net: 127.0.0.1:15464 accepted connection from: 127.0.0.1:58296
2016/06/23 07:52:44 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:44 [INFO] raft: Duplicate RequestVote for same term: 9
2016/06/23 07:52:44 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:44 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:44 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:45 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:45 [INFO] raft: Duplicate RequestVote for same term: 9
2016/06/23 07:52:45 [DEBUG] raft: Vote granted from 127.0.0.1:15468. Tally: 1
2016/06/23 07:52:45 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:45 [INFO] raft: Node at 127.0.0.1:15468 [Candidate] entering Candidate state
2016/06/23 07:52:45 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:45 [INFO] raft: Duplicate RequestVote for same term: 10
2016/06/23 07:52:45 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:45 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:45 [INFO] raft: Duplicate RequestVote for same term: 10
2016/06/23 07:52:45 [DEBUG] raft: Vote granted from 127.0.0.1:15468. Tally: 1
2016/06/23 07:52:45 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:45 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:45 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:45 [INFO] raft: Node at 127.0.0.1:15468 [Candidate] entering Candidate state
2016/06/23 07:52:46 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:46 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:46 [INFO] raft: Duplicate RequestVote for same term: 11
2016/06/23 07:52:46 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:46 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:46 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:46 [DEBUG] raft: Vote granted from 127.0.0.1:15468. Tally: 1
2016/06/23 07:52:46 [INFO] raft: Duplicate RequestVote for same term: 11
2016/06/23 07:52:46 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:46 [INFO] raft: Node at 127.0.0.1:15468 [Candidate] entering Candidate state
2016/06/23 07:52:47 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:47 [INFO] raft: Duplicate RequestVote for same term: 12
2016/06/23 07:52:47 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:47 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:47 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:47 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:47 [INFO] raft: Duplicate RequestVote for same term: 12
2016/06/23 07:52:47 [DEBUG] raft: Vote granted from 127.0.0.1:15468. Tally: 1
2016/06/23 07:52:47 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:47 [INFO] raft: Node at 127.0.0.1:15468 [Candidate] entering Candidate state
2016/06/23 07:52:47 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:47 [INFO] raft: Duplicate RequestVote for same term: 13
2016/06/23 07:52:47 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:47 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:47 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:48 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:48 [INFO] raft: Duplicate RequestVote for same term: 13
2016/06/23 07:52:48 [DEBUG] raft: Vote granted from 127.0.0.1:15468. Tally: 1
2016/06/23 07:52:48 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:48 [INFO] raft: Node at 127.0.0.1:15468 [Candidate] entering Candidate state
2016/06/23 07:52:48 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:48 [INFO] raft: Duplicate RequestVote for same term: 14
2016/06/23 07:52:48 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:48 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:48 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:48 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:48 [INFO] raft: Duplicate RequestVote for same term: 14
2016/06/23 07:52:48 [DEBUG] raft: Vote granted from 127.0.0.1:15468. Tally: 1
2016/06/23 07:52:48 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:48 [INFO] raft: Node at 127.0.0.1:15468 [Candidate] entering Candidate state
2016/06/23 07:52:49 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:49 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:49 [INFO] raft: Duplicate RequestVote for same term: 15
2016/06/23 07:52:49 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:49 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:49 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:49 [INFO] raft: Duplicate RequestVote for same term: 15
2016/06/23 07:52:49 [DEBUG] raft: Vote granted from 127.0.0.1:15468. Tally: 1
2016/06/23 07:52:49 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:49 [INFO] raft: Node at 127.0.0.1:15468 [Candidate] entering Candidate state
2016/06/23 07:52:49 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:49 [INFO] raft: Duplicate RequestVote for same term: 16
2016/06/23 07:52:49 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:49 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:49 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:50 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:50 [INFO] raft: Duplicate RequestVote for same term: 16
2016/06/23 07:52:50 [DEBUG] raft: Vote granted from 127.0.0.1:15468. Tally: 1
2016/06/23 07:52:50 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:50 [INFO] raft: Node at 127.0.0.1:15468 [Candidate] entering Candidate state
2016/06/23 07:52:50 [INFO] consul: shutting down server
2016/06/23 07:52:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:50 [DEBUG] memberlist: Failed UDP ping: Node 15471 (timeout reached)
2016/06/23 07:52:50 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:50 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:50 [INFO] raft: Duplicate RequestVote for same term: 17
2016/06/23 07:52:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:50 [INFO] memberlist: Suspect Node 15471 has failed, no acks received
2016/06/23 07:52:50 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:50 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:50 [DEBUG] memberlist: Failed UDP ping: Node 15471 (timeout reached)
2016/06/23 07:52:50 [INFO] memberlist: Suspect Node 15471 has failed, no acks received
2016/06/23 07:52:50 [DEBUG] memberlist: Failed UDP ping: Node 15471 (timeout reached)
2016/06/23 07:52:50 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:50 [INFO] raft: Duplicate RequestVote for same term: 17
2016/06/23 07:52:50 [DEBUG] raft: Vote granted from 127.0.0.1:15468. Tally: 1
2016/06/23 07:52:50 [INFO] memberlist: Suspect Node 15471 has failed, no acks received
2016/06/23 07:52:50 [INFO] memberlist: Marking Node 15471 as failed, suspect timeout reached
2016/06/23 07:52:50 [INFO] serf: EventMemberFailed: Node 15471 127.0.0.1
2016/06/23 07:52:50 [INFO] consul: removing LAN server Node 15471 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/06/23 07:52:50 [INFO] memberlist: Marking Node 15471 as failed, suspect timeout reached
2016/06/23 07:52:50 [INFO] serf: EventMemberFailed: Node 15471 127.0.0.1
2016/06/23 07:52:50 [INFO] consul: removing LAN server Node 15471 (Addr: 127.0.0.1:15472) (DC: dc1)
2016/06/23 07:52:50 [INFO] consul: shutting down server
2016/06/23 07:52:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:50 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:50 [INFO] raft: Node at 127.0.0.1:15468 [Candidate] entering Candidate state
2016/06/23 07:52:50 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:50 [DEBUG] memberlist: Failed UDP ping: Node 15467 (timeout reached)
2016/06/23 07:52:50 [INFO] memberlist: Suspect Node 15467 has failed, no acks received
2016/06/23 07:52:51 [DEBUG] memberlist: Failed UDP ping: Node 15467 (timeout reached)
2016/06/23 07:52:51 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:52:51 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15468: EOF
2016/06/23 07:52:51 [INFO] memberlist: Suspect Node 15467 has failed, no acks received
2016/06/23 07:52:51 [INFO] memberlist: Marking Node 15467 as failed, suspect timeout reached
2016/06/23 07:52:51 [INFO] serf: EventMemberFailed: Node 15467 127.0.0.1
2016/06/23 07:52:51 [INFO] consul: removing LAN server Node 15467 (Addr: 127.0.0.1:15468) (DC: dc1)
2016/06/23 07:52:51 [DEBUG] memberlist: Failed UDP ping: Node 15467 (timeout reached)
2016/06/23 07:52:51 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:51 [INFO] raft: Duplicate RequestVote for same term: 18
2016/06/23 07:52:51 [DEBUG] raft: Vote granted from 127.0.0.1:15464. Tally: 1
2016/06/23 07:52:51 [INFO] memberlist: Suspect Node 15467 has failed, no acks received
2016/06/23 07:52:51 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:51 [INFO] raft: Node at 127.0.0.1:15464 [Candidate] entering Candidate state
2016/06/23 07:52:51 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:51 [INFO] consul: shutting down server
2016/06/23 07:52:51 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:51 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:52:51 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15468: EOF
2016/06/23 07:52:51 [WARN] serf: Shutdown without a Leave
2016/06/23 07:52:52 [DEBUG] raft: Votes needed: 2
--- FAIL: TestLeader_LeftServer (14.76s)
	leader_test.go:347: should have 3 peers
=== RUN   TestLeader_LeftLeader
2016/06/23 07:52:52 [INFO] raft: Node at 127.0.0.1:15476 [Follower] entering Follower state
2016/06/23 07:52:52 [INFO] serf: EventMemberJoin: Node 15475 127.0.0.1
2016/06/23 07:52:52 [INFO] consul: adding LAN server Node 15475 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/06/23 07:52:52 [INFO] serf: EventMemberJoin: Node 15475.dc1 127.0.0.1
2016/06/23 07:52:52 [INFO] consul: adding WAN server Node 15475.dc1 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/06/23 07:52:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:52 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:52:53 [INFO] raft: Node at 127.0.0.1:15480 [Follower] entering Follower state
2016/06/23 07:52:53 [INFO] serf: EventMemberJoin: Node 15479 127.0.0.1
2016/06/23 07:52:53 [INFO] consul: adding LAN server Node 15479 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/06/23 07:52:53 [INFO] serf: EventMemberJoin: Node 15479.dc1 127.0.0.1
2016/06/23 07:52:53 [INFO] consul: adding WAN server Node 15479.dc1 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/06/23 07:52:53 [DEBUG] raft: Votes needed: 1
2016/06/23 07:52:53 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:52:53 [INFO] raft: Election won. Tally: 1
2016/06/23 07:52:53 [INFO] raft: Node at 127.0.0.1:15476 [Leader] entering Leader state
2016/06/23 07:52:53 [INFO] consul: cluster leadership acquired
2016/06/23 07:52:53 [INFO] consul: New leader elected: Node 15475
2016/06/23 07:52:53 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:52:54 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:52:54 [DEBUG] raft: Node 127.0.0.1:15476 updated peer set (2): [127.0.0.1:15476]
2016/06/23 07:52:54 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:52:54 [INFO] consul: member 'Node 15475' joined, marking health alive
2016/06/23 07:52:54 [INFO] raft: Node at 127.0.0.1:15484 [Follower] entering Follower state
2016/06/23 07:52:54 [INFO] serf: EventMemberJoin: Node 15483 127.0.0.1
2016/06/23 07:52:54 [INFO] consul: adding LAN server Node 15483 (Addr: 127.0.0.1:15484) (DC: dc1)
2016/06/23 07:52:54 [INFO] serf: EventMemberJoin: Node 15483.dc1 127.0.0.1
2016/06/23 07:52:54 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15477
2016/06/23 07:52:54 [DEBUG] memberlist: TCP connection from=127.0.0.1:42738
2016/06/23 07:52:54 [INFO] consul: adding WAN server Node 15483.dc1 (Addr: 127.0.0.1:15484) (DC: dc1)
2016/06/23 07:52:54 [INFO] serf: EventMemberJoin: Node 15479 127.0.0.1
2016/06/23 07:52:54 [INFO] serf: EventMemberJoin: Node 15475 127.0.0.1
2016/06/23 07:52:54 [INFO] consul: adding LAN server Node 15479 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/06/23 07:52:54 [INFO] consul: adding LAN server Node 15475 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/06/23 07:52:54 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15477
2016/06/23 07:52:54 [DEBUG] memberlist: TCP connection from=127.0.0.1:42740
2016/06/23 07:52:54 [INFO] serf: EventMemberJoin: Node 15483 127.0.0.1
2016/06/23 07:52:54 [INFO] consul: adding LAN server Node 15483 (Addr: 127.0.0.1:15484) (DC: dc1)
2016/06/23 07:52:54 [INFO] serf: EventMemberJoin: Node 15479 127.0.0.1
2016/06/23 07:52:54 [INFO] serf: EventMemberJoin: Node 15475 127.0.0.1
2016/06/23 07:52:54 [INFO] consul: adding LAN server Node 15479 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/06/23 07:52:54 [INFO] consul: adding LAN server Node 15475 (Addr: 127.0.0.1:15476) (DC: dc1)
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15479
2016/06/23 07:52:54 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:54 [INFO] serf: EventMemberJoin: Node 15483 127.0.0.1
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15479
2016/06/23 07:52:54 [INFO] consul: adding LAN server Node 15483 (Addr: 127.0.0.1:15484) (DC: dc1)
2016/06/23 07:52:54 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:54 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15479
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15479
2016/06/23 07:52:54 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:52:54 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15483
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15483
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15479
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15483
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15483
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15479
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15483
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15483
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15479
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15479
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15483
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15479
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15483
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15479
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15483
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15483
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15479
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15479
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15483
2016/06/23 07:52:54 [DEBUG] serf: messageJoinType: Node 15483
2016/06/23 07:52:54 [DEBUG] raft: Node 127.0.0.1:15476 updated peer set (2): [127.0.0.1:15480 127.0.0.1:15476]
2016/06/23 07:52:54 [INFO] raft: Added peer 127.0.0.1:15480, starting replication
2016/06/23 07:52:54 [DEBUG] raft-net: 127.0.0.1:15480 accepted connection from: 127.0.0.1:46814
2016/06/23 07:52:54 [DEBUG] raft-net: 127.0.0.1:15480 accepted connection from: 127.0.0.1:46816
2016/06/23 07:52:54 [DEBUG] raft: Failed to contact 127.0.0.1:15480 in 176.113434ms
2016/06/23 07:52:54 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:52:54 [INFO] raft: Node at 127.0.0.1:15476 [Follower] entering Follower state
2016/06/23 07:52:54 [INFO] consul: cluster leadership lost
2016/06/23 07:52:54 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:52:54 [ERR] consul: failed to reconcile member: {Node 15479 127.0.0.1 15481 map[dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15480 role:consul] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:52:54 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:52:54 [ERR] consul: failed to wait for barrier: node is not the leader
2016/06/23 07:52:54 [WARN] raft: Failed to get previous log: 4 log not found (last: 0)
2016/06/23 07:52:54 [WARN] raft: AppendEntries to 127.0.0.1:15480 rejected, sending older logs (next: 1)
2016/06/23 07:52:54 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:54 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:52:55 [DEBUG] raft-net: 127.0.0.1:15480 accepted connection from: 127.0.0.1:46818
2016/06/23 07:52:55 [DEBUG] memberlist: Potential blocking operation. Last command took 13.100404ms
2016/06/23 07:52:55 [DEBUG] raft: Node 127.0.0.1:15480 updated peer set (2): [127.0.0.1:15476]
2016/06/23 07:52:55 [INFO] raft: pipelining replication to peer 127.0.0.1:15480
2016/06/23 07:52:55 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15480
2016/06/23 07:52:55 [DEBUG] raft-net: 127.0.0.1:15480 accepted connection from: 127.0.0.1:46820
2016/06/23 07:52:55 [DEBUG] memberlist: Potential blocking operation. Last command took 10.092978ms
2016/06/23 07:52:55 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:55 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:52:55 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:55 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:52:55 [DEBUG] raft-net: 127.0.0.1:15480 accepted connection from: 127.0.0.1:46824
2016/06/23 07:52:55 [ERR] raft: peer 127.0.0.1:15480 has newer term, stopping replication
2016/06/23 07:52:55 [DEBUG] memberlist: Potential blocking operation. Last command took 16.126831ms
2016/06/23 07:52:56 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:56 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:52:56 [DEBUG] memberlist: Potential blocking operation. Last command took 10.14498ms
2016/06/23 07:52:56 [DEBUG] memberlist: Potential blocking operation. Last command took 10.084978ms
2016/06/23 07:52:56 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:56 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:52:56 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:56 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:52:56 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:56 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:52:56 [DEBUG] memberlist: Potential blocking operation. Last command took 13.735423ms
2016/06/23 07:52:56 [DEBUG] memberlist: Potential blocking operation. Last command took 13.789425ms
2016/06/23 07:52:57 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:57 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:52:57 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:57 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:52:57 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:52:57 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:52:57 [DEBUG] raft-net: 127.0.0.1:15476 accepted connection from: 127.0.0.1:52900
2016/06/23 07:52:58 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:58 [INFO] raft: Duplicate RequestVote for same term: 6
2016/06/23 07:52:58 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:52:58 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:58 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:52:58 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:58 [INFO] raft: Duplicate RequestVote for same term: 6
2016/06/23 07:52:58 [DEBUG] raft: Vote granted from 127.0.0.1:15480. Tally: 1
2016/06/23 07:52:58 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:58 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:52:58 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:58 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:52:58 [INFO] raft: Duplicate RequestVote for same term: 7
2016/06/23 07:52:59 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:59 [INFO] raft: Duplicate RequestVote for same term: 7
2016/06/23 07:52:59 [DEBUG] raft: Vote granted from 127.0.0.1:15480. Tally: 1
2016/06/23 07:52:59 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:59 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:52:59 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:59 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:52:59 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:59 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:52:59 [INFO] raft: Duplicate RequestVote for same term: 8
2016/06/23 07:52:59 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:59 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:52:59 [DEBUG] raft: Votes needed: 2
2016/06/23 07:52:59 [DEBUG] raft: Vote granted from 127.0.0.1:15480. Tally: 1
2016/06/23 07:52:59 [INFO] raft: Duplicate RequestVote for same term: 8
2016/06/23 07:52:59 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:52:59 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:53:00 [DEBUG] memberlist: Potential blocking operation. Last command took 15.605815ms
2016/06/23 07:53:00 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:00 [INFO] raft: Duplicate RequestVote for same term: 9
2016/06/23 07:53:00 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:53:00 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:00 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:53:00 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:00 [INFO] raft: Duplicate RequestVote for same term: 9
2016/06/23 07:53:00 [DEBUG] raft: Vote granted from 127.0.0.1:15480. Tally: 1
2016/06/23 07:53:00 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:00 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:53:00 [DEBUG] memberlist: Potential blocking operation. Last command took 10.066644ms
2016/06/23 07:53:00 [DEBUG] memberlist: Potential blocking operation. Last command took 11.472688ms
2016/06/23 07:53:00 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:00 [INFO] raft: Duplicate RequestVote for same term: 10
2016/06/23 07:53:00 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:53:00 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:00 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:53:00 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:00 [INFO] raft: Duplicate RequestVote for same term: 10
2016/06/23 07:53:00 [DEBUG] raft: Vote granted from 127.0.0.1:15480. Tally: 1
2016/06/23 07:53:01 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:01 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:53:01 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:01 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:53:01 [INFO] raft: Duplicate RequestVote for same term: 11
2016/06/23 07:53:01 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:01 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:53:01 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:01 [INFO] raft: Duplicate RequestVote for same term: 11
2016/06/23 07:53:01 [DEBUG] raft: Vote granted from 127.0.0.1:15480. Tally: 1
2016/06/23 07:53:01 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:01 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:53:02 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:02 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:53:02 [INFO] raft: Duplicate RequestVote for same term: 12
2016/06/23 07:53:02 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:02 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:53:02 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:02 [DEBUG] raft: Vote granted from 127.0.0.1:15480. Tally: 1
2016/06/23 07:53:02 [INFO] raft: Duplicate RequestVote for same term: 12
2016/06/23 07:53:02 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:02 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:53:02 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:02 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:53:02 [INFO] raft: Duplicate RequestVote for same term: 13
2016/06/23 07:53:02 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:02 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:53:02 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:02 [INFO] raft: Duplicate RequestVote for same term: 13
2016/06/23 07:53:02 [DEBUG] raft: Vote granted from 127.0.0.1:15480. Tally: 1
2016/06/23 07:53:02 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:02 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:53:03 [DEBUG] memberlist: Potential blocking operation. Last command took 47.396796ms
2016/06/23 07:53:03 [DEBUG] memberlist: Failed UDP ping: Node 15475 (timeout reached)
2016/06/23 07:53:03 [DEBUG] memberlist: Potential blocking operation. Last command took 25.236779ms
2016/06/23 07:53:03 [DEBUG] memberlist: TCP connection from=127.0.0.1:42756
2016/06/23 07:53:03 [DEBUG] memberlist: Potential blocking operation. Last command took 37.300484ms
2016/06/23 07:53:03 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:03 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:53:03 [INFO] raft: Duplicate RequestVote for same term: 14
2016/06/23 07:53:03 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:03 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:53:03 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:03 [DEBUG] raft: Vote granted from 127.0.0.1:15480. Tally: 1
2016/06/23 07:53:03 [INFO] raft: Duplicate RequestVote for same term: 14
2016/06/23 07:53:03 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:03 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:53:04 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:04 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:53:04 [INFO] raft: Duplicate RequestVote for same term: 15
2016/06/23 07:53:04 [DEBUG] memberlist: Potential blocking operation. Last command took 10.259983ms
2016/06/23 07:53:04 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:04 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:53:04 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:04 [INFO] raft: Duplicate RequestVote for same term: 15
2016/06/23 07:53:04 [DEBUG] raft: Vote granted from 127.0.0.1:15480. Tally: 1
2016/06/23 07:53:04 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:04 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:53:04 [DEBUG] memberlist: Potential blocking operation. Last command took 18.297231ms
2016/06/23 07:53:04 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:04 [INFO] raft: Duplicate RequestVote for same term: 16
2016/06/23 07:53:04 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:53:04 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:04 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:53:04 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:04 [INFO] raft: Duplicate RequestVote for same term: 16
2016/06/23 07:53:04 [DEBUG] raft: Vote granted from 127.0.0.1:15480. Tally: 1
2016/06/23 07:53:05 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:05 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:53:05 [DEBUG] memberlist: Potential blocking operation. Last command took 16.699848ms
2016/06/23 07:53:05 [DEBUG] memberlist: Potential blocking operation. Last command took 10.268983ms
2016/06/23 07:53:05 [INFO] consul: shutting down server
2016/06/23 07:53:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:05 [DEBUG] memberlist: Failed UDP ping: Node 15483 (timeout reached)
2016/06/23 07:53:05 [INFO] memberlist: Suspect Node 15483 has failed, no acks received
2016/06/23 07:53:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:05 [DEBUG] memberlist: Failed UDP ping: Node 15483 (timeout reached)
2016/06/23 07:53:05 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:05 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:53:05 [INFO] raft: Duplicate RequestVote for same term: 17
2016/06/23 07:53:05 [INFO] consul: shutting down server
2016/06/23 07:53:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:05 [INFO] memberlist: Suspect Node 15483 has failed, no acks received
2016/06/23 07:53:05 [DEBUG] memberlist: Failed UDP ping: Node 15479 (timeout reached)
2016/06/23 07:53:05 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:05 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:53:05 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:05 [INFO] raft: Duplicate RequestVote for same term: 17
2016/06/23 07:53:05 [DEBUG] raft: Vote granted from 127.0.0.1:15480. Tally: 1
2016/06/23 07:53:05 [INFO] memberlist: Marking Node 15483 as failed, suspect timeout reached
2016/06/23 07:53:05 [INFO] serf: EventMemberFailed: Node 15483 127.0.0.1
2016/06/23 07:53:05 [INFO] consul: removing LAN server Node 15483 (Addr: 127.0.0.1:15484) (DC: dc1)
2016/06/23 07:53:05 [INFO] memberlist: Suspect Node 15479 has failed, no acks received
2016/06/23 07:53:05 [INFO] memberlist: Marking Node 15483 as failed, suspect timeout reached
2016/06/23 07:53:05 [INFO] serf: EventMemberFailed: Node 15483 127.0.0.1
2016/06/23 07:53:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:05 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:05 [INFO] raft: Node at 127.0.0.1:15480 [Candidate] entering Candidate state
2016/06/23 07:53:05 [DEBUG] memberlist: Failed UDP ping: Node 15479 (timeout reached)
2016/06/23 07:53:05 [INFO] memberlist: Suspect Node 15479 has failed, no acks received
2016/06/23 07:53:05 [INFO] memberlist: Marking Node 15479 as failed, suspect timeout reached
2016/06/23 07:53:05 [INFO] serf: EventMemberFailed: Node 15479 127.0.0.1
2016/06/23 07:53:05 [INFO] consul: removing LAN server Node 15479 (Addr: 127.0.0.1:15480) (DC: dc1)
2016/06/23 07:53:05 [DEBUG] memberlist: Failed UDP ping: Node 15479 (timeout reached)
2016/06/23 07:53:05 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:53:05 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15480: EOF
2016/06/23 07:53:05 [INFO] memberlist: Suspect Node 15479 has failed, no acks received
2016/06/23 07:53:06 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:06 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:06 [INFO] raft: Duplicate RequestVote for same term: 18
2016/06/23 07:53:06 [DEBUG] raft: Vote granted from 127.0.0.1:15476. Tally: 1
2016/06/23 07:53:06 [INFO] consul: shutting down server
2016/06/23 07:53:06 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:06 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:06 [INFO] raft: Node at 127.0.0.1:15476 [Candidate] entering Candidate state
2016/06/23 07:53:06 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:06 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:53:06 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15480: EOF
2016/06/23 07:53:07 [DEBUG] raft: Votes needed: 2
--- FAIL: TestLeader_LeftLeader (15.01s)
	leader_test.go:400: should have 3 peers
=== RUN   TestLeader_MultiBootstrap
2016/06/23 07:53:07 [INFO] raft: Node at 127.0.0.1:15488 [Follower] entering Follower state
2016/06/23 07:53:07 [INFO] serf: EventMemberJoin: Node 15487 127.0.0.1
2016/06/23 07:53:07 [INFO] consul: adding LAN server Node 15487 (Addr: 127.0.0.1:15488) (DC: dc1)
2016/06/23 07:53:07 [INFO] serf: EventMemberJoin: Node 15487.dc1 127.0.0.1
2016/06/23 07:53:07 [INFO] consul: adding WAN server Node 15487.dc1 (Addr: 127.0.0.1:15488) (DC: dc1)
2016/06/23 07:53:07 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:07 [INFO] raft: Node at 127.0.0.1:15488 [Candidate] entering Candidate state
2016/06/23 07:53:08 [INFO] raft: Node at 127.0.0.1:15492 [Follower] entering Follower state
2016/06/23 07:53:08 [INFO] serf: EventMemberJoin: Node 15491 127.0.0.1
2016/06/23 07:53:08 [INFO] serf: EventMemberJoin: Node 15491.dc1 127.0.0.1
2016/06/23 07:53:08 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15489
2016/06/23 07:53:08 [INFO] consul: adding LAN server Node 15491 (Addr: 127.0.0.1:15492) (DC: dc1)
2016/06/23 07:53:08 [DEBUG] memberlist: TCP connection from=127.0.0.1:41722
2016/06/23 07:53:08 [INFO] consul: adding WAN server Node 15491.dc1 (Addr: 127.0.0.1:15492) (DC: dc1)
2016/06/23 07:53:08 [INFO] serf: EventMemberJoin: Node 15491 127.0.0.1
2016/06/23 07:53:08 [INFO] consul: adding LAN server Node 15491 (Addr: 127.0.0.1:15492) (DC: dc1)
2016/06/23 07:53:08 [INFO] serf: EventMemberJoin: Node 15487 127.0.0.1
2016/06/23 07:53:08 [INFO] consul: adding LAN server Node 15487 (Addr: 127.0.0.1:15488) (DC: dc1)
2016/06/23 07:53:08 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:08 [DEBUG] raft: Vote granted from 127.0.0.1:15488. Tally: 1
2016/06/23 07:53:08 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:08 [INFO] raft: Node at 127.0.0.1:15488 [Leader] entering Leader state
2016/06/23 07:53:08 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:08 [INFO] consul: New leader elected: Node 15487
2016/06/23 07:53:08 [INFO] consul: shutting down server
2016/06/23 07:53:08 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:08 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:08 [INFO] raft: Node at 127.0.0.1:15492 [Candidate] entering Candidate state
2016/06/23 07:53:08 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:08 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:08 [DEBUG] memberlist: Failed UDP ping: Node 15491 (timeout reached)
2016/06/23 07:53:08 [INFO] memberlist: Suspect Node 15491 has failed, no acks received
2016/06/23 07:53:08 [DEBUG] raft: Node 127.0.0.1:15488 updated peer set (2): [127.0.0.1:15488]
2016/06/23 07:53:08 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:08 [INFO] consul: member 'Node 15487' joined, marking health alive
2016/06/23 07:53:08 [DEBUG] memberlist: Failed UDP ping: Node 15491 (timeout reached)
2016/06/23 07:53:08 [INFO] memberlist: Suspect Node 15491 has failed, no acks received
2016/06/23 07:53:08 [INFO] memberlist: Marking Node 15491 as failed, suspect timeout reached
2016/06/23 07:53:08 [INFO] serf: EventMemberFailed: Node 15491 127.0.0.1
2016/06/23 07:53:08 [INFO] consul: removing LAN server Node 15491 (Addr: 127.0.0.1:15492) (DC: dc1)
2016/06/23 07:53:08 [ERR] consul: 'Node 15491' and 'Node 15487' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:53:08 [INFO] consul: member 'Node 15491' joined, marking health alive
2016/06/23 07:53:08 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:08 [INFO] consul: member 'Node 15491' failed, marking health critical
2016/06/23 07:53:08 [INFO] consul: shutting down server
2016/06/23 07:53:08 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:09 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestLeader_MultiBootstrap (2.19s)
=== RUN   TestLeader_TombstoneGC_Reset
2016/06/23 07:53:09 [INFO] raft: Node at 127.0.0.1:15496 [Follower] entering Follower state
2016/06/23 07:53:09 [INFO] serf: EventMemberJoin: Node 15495 127.0.0.1
2016/06/23 07:53:09 [INFO] consul: adding LAN server Node 15495 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/06/23 07:53:09 [INFO] serf: EventMemberJoin: Node 15495.dc1 127.0.0.1
2016/06/23 07:53:09 [INFO] consul: adding WAN server Node 15495.dc1 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/06/23 07:53:09 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:09 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:10 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:10 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:10 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:10 [INFO] raft: Node at 127.0.0.1:15496 [Leader] entering Leader state
2016/06/23 07:53:10 [INFO] raft: Node at 127.0.0.1:15500 [Follower] entering Follower state
2016/06/23 07:53:10 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:10 [INFO] consul: New leader elected: Node 15495
2016/06/23 07:53:10 [INFO] serf: EventMemberJoin: Node 15499 127.0.0.1
2016/06/23 07:53:10 [INFO] consul: adding LAN server Node 15499 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/06/23 07:53:10 [INFO] serf: EventMemberJoin: Node 15499.dc1 127.0.0.1
2016/06/23 07:53:10 [INFO] consul: adding WAN server Node 15499.dc1 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/06/23 07:53:10 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:53:10 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:10 [DEBUG] raft: Node 127.0.0.1:15496 updated peer set (2): [127.0.0.1:15496]
2016/06/23 07:53:10 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:10 [INFO] consul: member 'Node 15495' joined, marking health alive
2016/06/23 07:53:10 [INFO] raft: Node at 127.0.0.1:15504 [Follower] entering Follower state
2016/06/23 07:53:10 [INFO] serf: EventMemberJoin: Node 15503 127.0.0.1
2016/06/23 07:53:10 [INFO] consul: adding LAN server Node 15503 (Addr: 127.0.0.1:15504) (DC: dc1)
2016/06/23 07:53:10 [INFO] serf: EventMemberJoin: Node 15503.dc1 127.0.0.1
2016/06/23 07:53:10 [INFO] consul: adding WAN server Node 15503.dc1 (Addr: 127.0.0.1:15504) (DC: dc1)
2016/06/23 07:53:10 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15497
2016/06/23 07:53:10 [DEBUG] memberlist: TCP connection from=127.0.0.1:41344
2016/06/23 07:53:10 [INFO] serf: EventMemberJoin: Node 15499 127.0.0.1
2016/06/23 07:53:10 [INFO] consul: adding LAN server Node 15499 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/06/23 07:53:10 [INFO] serf: EventMemberJoin: Node 15495 127.0.0.1
2016/06/23 07:53:10 [DEBUG] memberlist: TCP connection from=127.0.0.1:41346
2016/06/23 07:53:10 [INFO] consul: adding LAN server Node 15495 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/06/23 07:53:10 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15497
2016/06/23 07:53:10 [INFO] serf: EventMemberJoin: Node 15503 127.0.0.1
2016/06/23 07:53:10 [INFO] consul: adding LAN server Node 15503 (Addr: 127.0.0.1:15504) (DC: dc1)
2016/06/23 07:53:10 [INFO] serf: EventMemberJoin: Node 15499 127.0.0.1
2016/06/23 07:53:10 [INFO] consul: adding LAN server Node 15499 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/06/23 07:53:10 [INFO] serf: EventMemberJoin: Node 15495 127.0.0.1
2016/06/23 07:53:10 [INFO] consul: adding LAN server Node 15495 (Addr: 127.0.0.1:15496) (DC: dc1)
2016/06/23 07:53:10 [DEBUG] serf: messageJoinType: Node 15499
2016/06/23 07:53:10 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/06/23 07:53:10 [DEBUG] serf: messageJoinType: Node 15499
2016/06/23 07:53:10 [INFO] serf: EventMemberJoin: Node 15503 127.0.0.1
2016/06/23 07:53:10 [INFO] consul: adding LAN server Node 15503 (Addr: 127.0.0.1:15504) (DC: dc1)
2016/06/23 07:53:10 [DEBUG] serf: messageJoinType: Node 15499
2016/06/23 07:53:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:10 [DEBUG] serf: messageJoinType: Node 15499
2016/06/23 07:53:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:10 [DEBUG] serf: messageJoinType: Node 15499
2016/06/23 07:53:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:10 [DEBUG] serf: messageJoinType: Node 15499
2016/06/23 07:53:10 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:10 [DEBUG] serf: messageJoinType: Node 15503
2016/06/23 07:53:10 [DEBUG] serf: messageJoinType: Node 15499
2016/06/23 07:53:10 [DEBUG] serf: messageJoinType: Node 15503
2016/06/23 07:53:10 [DEBUG] serf: messageJoinType: Node 15499
2016/06/23 07:53:10 [DEBUG] serf: messageJoinType: Node 15503
2016/06/23 07:53:10 [DEBUG] serf: messageJoinType: Node 15499
2016/06/23 07:53:11 [DEBUG] serf: messageJoinType: Node 15499
2016/06/23 07:53:11 [DEBUG] serf: messageJoinType: Node 15503
2016/06/23 07:53:11 [DEBUG] raft: Node 127.0.0.1:15496 updated peer set (2): [127.0.0.1:15500 127.0.0.1:15496]
2016/06/23 07:53:11 [INFO] raft: Added peer 127.0.0.1:15500, starting replication
2016/06/23 07:53:11 [DEBUG] raft-net: 127.0.0.1:15500 accepted connection from: 127.0.0.1:49414
2016/06/23 07:53:11 [DEBUG] raft-net: 127.0.0.1:15500 accepted connection from: 127.0.0.1:49416
2016/06/23 07:53:11 [DEBUG] serf: messageJoinType: Node 15503
2016/06/23 07:53:11 [DEBUG] serf: messageJoinType: Node 15503
2016/06/23 07:53:11 [DEBUG] serf: messageJoinType: Node 15503
2016/06/23 07:53:11 [DEBUG] serf: messageJoinType: Node 15503
2016/06/23 07:53:11 [DEBUG] serf: messageJoinType: Node 15499
2016/06/23 07:53:11 [DEBUG] serf: messageJoinType: Node 15503
2016/06/23 07:53:11 [DEBUG] serf: messageJoinType: Node 15499
2016/06/23 07:53:11 [DEBUG] serf: messageJoinType: Node 15503
2016/06/23 07:53:11 [DEBUG] serf: messageJoinType: Node 15503
2016/06/23 07:53:11 [DEBUG] serf: messageJoinType: Node 15503
2016/06/23 07:53:11 [DEBUG] raft: Failed to contact 127.0.0.1:15500 in 176.245438ms
2016/06/23 07:53:11 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/06/23 07:53:11 [INFO] raft: Node at 127.0.0.1:15496 [Follower] entering Follower state
2016/06/23 07:53:11 [INFO] consul: cluster leadership lost
2016/06/23 07:53:11 [ERR] consul: failed to add raft peer: leadership lost while committing log
2016/06/23 07:53:11 [ERR] consul: failed to reconcile member: {Node 15499 127.0.0.1 15501 map[role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3 build: port:15500] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:53:11 [ERR] consul: failed to reconcile: leadership lost while committing log
2016/06/23 07:53:11 [WARN] raft: Failed to get previous log: 4 log not found (last: 0)
2016/06/23 07:53:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:11 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:11 [WARN] raft: AppendEntries to 127.0.0.1:15500 rejected, sending older logs (next: 1)
2016/06/23 07:53:11 [DEBUG] raft-net: 127.0.0.1:15500 accepted connection from: 127.0.0.1:49418
2016/06/23 07:53:11 [DEBUG] raft: Node 127.0.0.1:15500 updated peer set (2): [127.0.0.1:15496]
2016/06/23 07:53:11 [INFO] raft: pipelining replication to peer 127.0.0.1:15500
2016/06/23 07:53:11 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15500
2016/06/23 07:53:11 [DEBUG] raft-net: 127.0.0.1:15500 accepted connection from: 127.0.0.1:49420
2016/06/23 07:53:11 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:11 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:11 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:11 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:11 [DEBUG] memberlist: Potential blocking operation. Last command took 11.24268ms
2016/06/23 07:53:12 [DEBUG] raft-net: 127.0.0.1:15500 accepted connection from: 127.0.0.1:49422
2016/06/23 07:53:12 [ERR] raft: peer 127.0.0.1:15500 has newer term, stopping replication
2016/06/23 07:53:12 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:12 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:12 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:12 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:13 [DEBUG] memberlist: Potential blocking operation. Last command took 13.240742ms
2016/06/23 07:53:13 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:13 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:13 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:13 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:14 [DEBUG] memberlist: Potential blocking operation. Last command took 11.957369ms
2016/06/23 07:53:14 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:14 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:14 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:14 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:14 [DEBUG] memberlist: Potential blocking operation. Last command took 15.788821ms
2016/06/23 07:53:15 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:15 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:15 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:15 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:15 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:15 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:15 [DEBUG] memberlist: Potential blocking operation. Last command took 11.709028ms
2016/06/23 07:53:15 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:15 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:15 [INFO] raft: Node at 127.0.0.1:15500 [Candidate] entering Candidate state
2016/06/23 07:53:15 [DEBUG] raft-net: 127.0.0.1:15496 accepted connection from: 127.0.0.1:50098
2016/06/23 07:53:16 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:16 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:16 [INFO] raft: Duplicate RequestVote for same term: 8
2016/06/23 07:53:16 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:16 [INFO] raft: Duplicate RequestVote for same term: 8
2016/06/23 07:53:16 [DEBUG] raft: Vote granted from 127.0.0.1:15500. Tally: 1
2016/06/23 07:53:16 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:16 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:16 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:16 [INFO] raft: Node at 127.0.0.1:15500 [Candidate] entering Candidate state
2016/06/23 07:53:16 [DEBUG] memberlist: Potential blocking operation. Last command took 11.527689ms
2016/06/23 07:53:16 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:16 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:16 [INFO] raft: Duplicate RequestVote for same term: 9
2016/06/23 07:53:16 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:16 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:16 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:16 [INFO] raft: Duplicate RequestVote for same term: 9
2016/06/23 07:53:16 [DEBUG] raft: Vote granted from 127.0.0.1:15500. Tally: 1
2016/06/23 07:53:16 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:16 [INFO] raft: Node at 127.0.0.1:15500 [Candidate] entering Candidate state
2016/06/23 07:53:17 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:17 [INFO] raft: Duplicate RequestVote for same term: 10
2016/06/23 07:53:17 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:17 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:17 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:17 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:17 [INFO] raft: Duplicate RequestVote for same term: 10
2016/06/23 07:53:17 [DEBUG] raft: Vote granted from 127.0.0.1:15500. Tally: 1
2016/06/23 07:53:17 [DEBUG] memberlist: Potential blocking operation. Last command took 18.635241ms
2016/06/23 07:53:17 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:17 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:17 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:17 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:17 [INFO] raft: Node at 127.0.0.1:15500 [Follower] entering Follower state
2016/06/23 07:53:17 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:17 [INFO] raft: Node at 127.0.0.1:15500 [Candidate] entering Candidate state
2016/06/23 07:53:18 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:18 [INFO] raft: Duplicate RequestVote for same term: 12
2016/06/23 07:53:18 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:18 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:18 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:18 [DEBUG] memberlist: Potential blocking operation. Last command took 12.43405ms
2016/06/23 07:53:18 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:18 [INFO] raft: Duplicate RequestVote for same term: 12
2016/06/23 07:53:18 [DEBUG] raft: Vote granted from 127.0.0.1:15500. Tally: 1
2016/06/23 07:53:19 [DEBUG] memberlist: Potential blocking operation. Last command took 12.19171ms
2016/06/23 07:53:19 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:19 [INFO] raft: Node at 127.0.0.1:15500 [Candidate] entering Candidate state
2016/06/23 07:53:19 [DEBUG] memberlist: Potential blocking operation. Last command took 16.066495ms
2016/06/23 07:53:20 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:20 [INFO] raft: Duplicate RequestVote for same term: 13
2016/06/23 07:53:20 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:20 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:20 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:20 [DEBUG] memberlist: Potential blocking operation. Last command took 22.968709ms
2016/06/23 07:53:20 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:20 [INFO] raft: Duplicate RequestVote for same term: 13
2016/06/23 07:53:20 [DEBUG] raft: Vote granted from 127.0.0.1:15500. Tally: 1
2016/06/23 07:53:20 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:20 [INFO] raft: Node at 127.0.0.1:15500 [Candidate] entering Candidate state
2016/06/23 07:53:21 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:21 [INFO] raft: Duplicate RequestVote for same term: 14
2016/06/23 07:53:21 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:21 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:21 [INFO] raft: Duplicate RequestVote for same term: 14
2016/06/23 07:53:21 [DEBUG] raft: Vote granted from 127.0.0.1:15500. Tally: 1
2016/06/23 07:53:21 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:21 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:21 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:21 [INFO] raft: Node at 127.0.0.1:15500 [Candidate] entering Candidate state
2016/06/23 07:53:22 [INFO] consul: shutting down server
2016/06/23 07:53:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:22 [DEBUG] memberlist: Failed UDP ping: Node 15503 (timeout reached)
2016/06/23 07:53:22 [INFO] memberlist: Suspect Node 15503 has failed, no acks received
2016/06/23 07:53:22 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:22 [INFO] raft: Duplicate RequestVote for same term: 15
2016/06/23 07:53:22 [DEBUG] raft: Vote granted from 127.0.0.1:15500. Tally: 1
2016/06/23 07:53:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:22 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:22 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:22 [INFO] raft: Duplicate RequestVote for same term: 15
2016/06/23 07:53:22 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:22 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:22 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:22 [INFO] raft: Node at 127.0.0.1:15500 [Candidate] entering Candidate state
2016/06/23 07:53:22 [INFO] consul: shutting down server
2016/06/23 07:53:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:22 [DEBUG] memberlist: Failed UDP ping: Node 15503 (timeout reached)
2016/06/23 07:53:22 [ERR] memberlist: Failed to send indirect ping: write udp 127.0.0.1:15501->127.0.0.1:15497: use of closed network connection
2016/06/23 07:53:22 [DEBUG] memberlist: Failed UDP ping: Node 15503 (timeout reached)
2016/06/23 07:53:22 [INFO] memberlist: Suspect Node 15503 has failed, no acks received
2016/06/23 07:53:22 [INFO] memberlist: Marking Node 15503 as failed, suspect timeout reached
2016/06/23 07:53:22 [INFO] memberlist: Suspect Node 15503 has failed, no acks received
2016/06/23 07:53:22 [INFO] serf: EventMemberFailed: Node 15503 127.0.0.1
2016/06/23 07:53:22 [INFO] consul: removing LAN server Node 15503 (Addr: 127.0.0.1:15504) (DC: dc1)
2016/06/23 07:53:22 [INFO] memberlist: Marking Node 15503 as failed, suspect timeout reached
2016/06/23 07:53:22 [INFO] serf: EventMemberFailed: Node 15503 127.0.0.1
2016/06/23 07:53:22 [DEBUG] memberlist: Failed UDP ping: Node 15499 (timeout reached)
2016/06/23 07:53:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:22 [INFO] memberlist: Suspect Node 15499 has failed, no acks received
2016/06/23 07:53:22 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:53:22 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15500: EOF
2016/06/23 07:53:22 [DEBUG] memberlist: Failed UDP ping: Node 15499 (timeout reached)
2016/06/23 07:53:22 [INFO] memberlist: Suspect Node 15499 has failed, no acks received
2016/06/23 07:53:22 [INFO] memberlist: Marking Node 15499 as failed, suspect timeout reached
2016/06/23 07:53:22 [INFO] serf: EventMemberFailed: Node 15499 127.0.0.1
2016/06/23 07:53:22 [INFO] consul: removing LAN server Node 15499 (Addr: 127.0.0.1:15500) (DC: dc1)
2016/06/23 07:53:22 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:22 [DEBUG] raft: Votes needed: 2
2016/06/23 07:53:22 [DEBUG] raft: Vote granted from 127.0.0.1:15496. Tally: 1
2016/06/23 07:53:22 [INFO] raft: Duplicate RequestVote for same term: 16
2016/06/23 07:53:22 [INFO] consul: shutting down server
2016/06/23 07:53:22 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:22 [WARN] raft: Election timeout reached, restarting election
2016/06/23 07:53:22 [INFO] raft: Node at 127.0.0.1:15496 [Candidate] entering Candidate state
2016/06/23 07:53:23 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:23 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/06/23 07:53:23 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15500: EOF
2016/06/23 07:53:23 [DEBUG] raft: Votes needed: 2
--- FAIL: TestLeader_TombstoneGC_Reset (14.19s)
	leader_test.go:511: should have 3 peers
=== RUN   TestLeader_ReapTombstones
2016/06/23 07:53:24 [INFO] raft: Node at 127.0.0.1:15508 [Follower] entering Follower state
2016/06/23 07:53:24 [INFO] serf: EventMemberJoin: Node 15507 127.0.0.1
2016/06/23 07:53:24 [INFO] consul: adding LAN server Node 15507 (Addr: 127.0.0.1:15508) (DC: dc1)
2016/06/23 07:53:24 [INFO] serf: EventMemberJoin: Node 15507.dc1 127.0.0.1
2016/06/23 07:53:24 [INFO] consul: adding WAN server Node 15507.dc1 (Addr: 127.0.0.1:15508) (DC: dc1)
2016/06/23 07:53:24 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:24 [INFO] raft: Node at 127.0.0.1:15508 [Candidate] entering Candidate state
2016/06/23 07:53:24 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:24 [DEBUG] raft: Vote granted from 127.0.0.1:15508. Tally: 1
2016/06/23 07:53:24 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:24 [INFO] raft: Node at 127.0.0.1:15508 [Leader] entering Leader state
2016/06/23 07:53:24 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:24 [INFO] consul: New leader elected: Node 15507
2016/06/23 07:53:24 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:24 [DEBUG] raft: Node 127.0.0.1:15508 updated peer set (2): [127.0.0.1:15508]
2016/06/23 07:53:24 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:24 [INFO] consul: member 'Node 15507' joined, marking health alive
2016/06/23 07:53:25 [INFO] consul: shutting down server
2016/06/23 07:53:25 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:25 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:26 [ERR] consul: failed to wait for barrier: leadership lost while committing log
--- PASS: TestLeader_ReapTombstones (2.53s)
=== RUN   TestPreparedQuery_Apply
2016/06/23 07:53:26 [INFO] raft: Node at 127.0.0.1:15512 [Follower] entering Follower state
2016/06/23 07:53:26 [INFO] serf: EventMemberJoin: Node 15511 127.0.0.1
2016/06/23 07:53:26 [INFO] consul: adding LAN server Node 15511 (Addr: 127.0.0.1:15512) (DC: dc1)
2016/06/23 07:53:26 [INFO] serf: EventMemberJoin: Node 15511.dc1 127.0.0.1
2016/06/23 07:53:26 [INFO] consul: adding WAN server Node 15511.dc1 (Addr: 127.0.0.1:15512) (DC: dc1)
2016/06/23 07:53:26 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:26 [INFO] raft: Node at 127.0.0.1:15512 [Candidate] entering Candidate state
2016/06/23 07:53:26 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:26 [DEBUG] raft: Vote granted from 127.0.0.1:15512. Tally: 1
2016/06/23 07:53:26 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:26 [INFO] raft: Node at 127.0.0.1:15512 [Leader] entering Leader state
2016/06/23 07:53:26 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:26 [INFO] consul: New leader elected: Node 15511
2016/06/23 07:53:27 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:27 [DEBUG] raft: Node 127.0.0.1:15512 updated peer set (2): [127.0.0.1:15512]
2016/06/23 07:53:27 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:27 [INFO] consul: member 'Node 15511' joined, marking health alive
2016/06/23 07:53:29 [INFO] consul: shutting down server
2016/06/23 07:53:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:29 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:29 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:53:29 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestPreparedQuery_Apply (3.61s)
=== RUN   TestPreparedQuery_Apply_ACLDeny
2016/06/23 07:53:30 [INFO] raft: Node at 127.0.0.1:15516 [Follower] entering Follower state
2016/06/23 07:53:30 [INFO] serf: EventMemberJoin: Node 15515 127.0.0.1
2016/06/23 07:53:30 [INFO] consul: adding LAN server Node 15515 (Addr: 127.0.0.1:15516) (DC: dc1)
2016/06/23 07:53:30 [INFO] serf: EventMemberJoin: Node 15515.dc1 127.0.0.1
2016/06/23 07:53:30 [INFO] consul: adding WAN server Node 15515.dc1 (Addr: 127.0.0.1:15516) (DC: dc1)
2016/06/23 07:53:30 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:30 [INFO] raft: Node at 127.0.0.1:15516 [Candidate] entering Candidate state
2016/06/23 07:53:30 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:30 [DEBUG] raft: Vote granted from 127.0.0.1:15516. Tally: 1
2016/06/23 07:53:30 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:30 [INFO] raft: Node at 127.0.0.1:15516 [Leader] entering Leader state
2016/06/23 07:53:30 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:30 [INFO] consul: New leader elected: Node 15515
2016/06/23 07:53:30 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:30 [DEBUG] raft: Node 127.0.0.1:15516 updated peer set (2): [127.0.0.1:15516]
2016/06/23 07:53:30 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:30 [INFO] consul: member 'Node 15515' joined, marking health alive
2016/06/23 07:53:31 [WARN] consul.prepared_query: Operation on prepared query '91d850c3-4c43-80c4-5f57-63896725a3ed' denied due to ACLs
2016/06/23 07:53:31 [WARN] consul.prepared_query: Operation on prepared query '886f2279-19ad-7bc1-8f9a-93184b70fa30' denied due to ACLs
2016/06/23 07:53:31 [WARN] consul.prepared_query: Operation on prepared query '886f2279-19ad-7bc1-8f9a-93184b70fa30' denied due to ACLs
2016/06/23 07:53:33 [WARN] consul.prepared_query: Operation on prepared query 'd169d98d-50c2-e385-9ef7-27f310ec1335' denied due to ACLs
2016/06/23 07:53:33 [INFO] consul: shutting down server
2016/06/23 07:53:33 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:34 [WARN] serf: Shutdown without a Leave
--- PASS: TestPreparedQuery_Apply_ACLDeny (4.42s)
=== RUN   TestPreparedQuery_Apply_ForwardLeader
2016/06/23 07:53:34 [INFO] raft: Node at 127.0.0.1:15520 [Follower] entering Follower state
2016/06/23 07:53:34 [INFO] serf: EventMemberJoin: Node 15519 127.0.0.1
2016/06/23 07:53:34 [INFO] consul: adding LAN server Node 15519 (Addr: 127.0.0.1:15520) (DC: dc1)
2016/06/23 07:53:34 [INFO] serf: EventMemberJoin: Node 15519.dc1 127.0.0.1
2016/06/23 07:53:34 [INFO] consul: adding WAN server Node 15519.dc1 (Addr: 127.0.0.1:15520) (DC: dc1)
2016/06/23 07:53:34 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:34 [INFO] raft: Node at 127.0.0.1:15520 [Candidate] entering Candidate state
2016/06/23 07:53:35 [INFO] serf: EventMemberJoin: Node 15523 127.0.0.1
2016/06/23 07:53:35 [INFO] raft: Node at 127.0.0.1:15524 [Follower] entering Follower state
2016/06/23 07:53:35 [INFO] consul: adding LAN server Node 15523 (Addr: 127.0.0.1:15524) (DC: dc1)
2016/06/23 07:53:35 [INFO] serf: EventMemberJoin: Node 15523.dc1 127.0.0.1
2016/06/23 07:53:35 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15521
2016/06/23 07:53:35 [DEBUG] memberlist: TCP connection from=127.0.0.1:53858
2016/06/23 07:53:35 [INFO] consul: adding WAN server Node 15523.dc1 (Addr: 127.0.0.1:15524) (DC: dc1)
2016/06/23 07:53:35 [INFO] serf: EventMemberJoin: Node 15523 127.0.0.1
2016/06/23 07:53:35 [INFO] serf: EventMemberJoin: Node 15519 127.0.0.1
2016/06/23 07:53:35 [INFO] consul: adding LAN server Node 15523 (Addr: 127.0.0.1:15524) (DC: dc1)
2016/06/23 07:53:35 [INFO] consul: adding LAN server Node 15519 (Addr: 127.0.0.1:15520) (DC: dc1)
2016/06/23 07:53:35 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:35 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:35 [DEBUG] raft: Vote granted from 127.0.0.1:15520. Tally: 1
2016/06/23 07:53:35 [INFO] raft: Node at 127.0.0.1:15524 [Candidate] entering Candidate state
2016/06/23 07:53:35 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:35 [INFO] raft: Node at 127.0.0.1:15520 [Leader] entering Leader state
2016/06/23 07:53:35 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:35 [INFO] consul: New leader elected: Node 15519
2016/06/23 07:53:35 [DEBUG] memberlist: Potential blocking operation. Last command took 10.414655ms
2016/06/23 07:53:35 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:35 [DEBUG] memberlist: Potential blocking operation. Last command took 10.551325ms
2016/06/23 07:53:35 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:35 [INFO] consul: New leader elected: Node 15519
2016/06/23 07:53:35 [DEBUG] serf: messageJoinType: Node 15523
2016/06/23 07:53:35 [DEBUG] serf: messageJoinType: Node 15523
2016/06/23 07:53:35 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:35 [DEBUG] memberlist: Potential blocking operation. Last command took 12.368048ms
2016/06/23 07:53:35 [DEBUG] serf: messageJoinType: Node 15523
2016/06/23 07:53:35 [DEBUG] serf: messageJoinType: Node 15523
2016/06/23 07:53:35 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:35 [DEBUG] raft: Node 127.0.0.1:15520 updated peer set (2): [127.0.0.1:15520]
2016/06/23 07:53:35 [DEBUG] serf: messageJoinType: Node 15523
2016/06/23 07:53:35 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:35 [DEBUG] serf: messageJoinType: Node 15523
2016/06/23 07:53:35 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:35 [DEBUG] serf: messageJoinType: Node 15523
2016/06/23 07:53:35 [DEBUG] serf: messageJoinType: Node 15523
2016/06/23 07:53:35 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:35 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:35 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:36 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:36 [INFO] consul: member 'Node 15519' joined, marking health alive
2016/06/23 07:53:36 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:36 [DEBUG] raft: Vote granted from 127.0.0.1:15524. Tally: 1
2016/06/23 07:53:36 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:36 [INFO] raft: Node at 127.0.0.1:15524 [Leader] entering Leader state
2016/06/23 07:53:36 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:36 [INFO] consul: New leader elected: Node 15523
2016/06/23 07:53:36 [ERR] consul: 'Node 15523' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:53:36 [INFO] consul: member 'Node 15523' joined, marking health alive
2016/06/23 07:53:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:36 [INFO] consul: New leader elected: Node 15523
2016/06/23 07:53:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:36 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:53:36 [DEBUG] raft: Node 127.0.0.1:15524 updated peer set (2): [127.0.0.1:15524]
2016/06/23 07:53:36 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:36 [INFO] consul: member 'Node 15523' joined, marking health alive
2016/06/23 07:53:36 [ERR] consul: 'Node 15523' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:53:36 [ERR] consul: 'Node 15519' and 'Node 15523' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:53:36 [INFO] consul: member 'Node 15519' joined, marking health alive
2016/06/23 07:53:37 [ERR] consul: 'Node 15523' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:53:37 [ERR] consul: 'Node 15523' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:53:37 [ERR] consul: 'Node 15523' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:53:37 [ERR] consul: 'Node 15523' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:53:38 [INFO] consul: shutting down server
2016/06/23 07:53:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:38 [ERR] consul: 'Node 15523' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:53:38 [ERR] consul: 'Node 15519' and 'Node 15523' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:53:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:38 [DEBUG] memberlist: Failed UDP ping: Node 15523 (timeout reached)
2016/06/23 07:53:38 [ERR] consul: 'Node 15523' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:53:38 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:53:38 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/06/23 07:53:38 [INFO] memberlist: Suspect Node 15523 has failed, no acks received
2016/06/23 07:53:38 [INFO] consul: shutting down server
2016/06/23 07:53:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:38 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:38 [ERR] consul: 'Node 15523' and 'Node 15519' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:53:38 [INFO] memberlist: Marking Node 15523 as failed, suspect timeout reached
2016/06/23 07:53:38 [INFO] serf: EventMemberFailed: Node 15523 127.0.0.1
--- PASS: TestPreparedQuery_Apply_ForwardLeader (4.45s)
=== RUN   TestPreparedQuery_parseQuery
--- PASS: TestPreparedQuery_parseQuery (0.00s)
=== RUN   TestPreparedQuery_ACLDeny_Catchall_Template
2016/06/23 07:53:39 [INFO] raft: Node at 127.0.0.1:15528 [Follower] entering Follower state
2016/06/23 07:53:39 [INFO] serf: EventMemberJoin: Node 15527 127.0.0.1
2016/06/23 07:53:39 [INFO] consul: adding LAN server Node 15527 (Addr: 127.0.0.1:15528) (DC: dc1)
2016/06/23 07:53:39 [INFO] serf: EventMemberJoin: Node 15527.dc1 127.0.0.1
2016/06/23 07:53:39 [INFO] consul: adding WAN server Node 15527.dc1 (Addr: 127.0.0.1:15528) (DC: dc1)
2016/06/23 07:53:39 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:39 [INFO] raft: Node at 127.0.0.1:15528 [Candidate] entering Candidate state
2016/06/23 07:53:39 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:39 [DEBUG] raft: Vote granted from 127.0.0.1:15528. Tally: 1
2016/06/23 07:53:39 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:39 [INFO] raft: Node at 127.0.0.1:15528 [Leader] entering Leader state
2016/06/23 07:53:39 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:39 [INFO] consul: New leader elected: Node 15527
2016/06/23 07:53:39 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:39 [DEBUG] raft: Node 127.0.0.1:15528 updated peer set (2): [127.0.0.1:15528]
2016/06/23 07:53:39 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:40 [INFO] consul: member 'Node 15527' joined, marking health alive
2016/06/23 07:53:40 [WARN] consul.prepared_query: Operation on prepared query '085d3025-f2a0-f0b3-17d9-11f317b97e99' denied due to ACLs
2016/06/23 07:53:40 [DEBUG] consul: dropping prepared query "b56863cb-5922-b702-b4e3-fdb463e2f6ef" from result due to ACLs
2016/06/23 07:53:40 [WARN] consul.prepared_query: Request to get prepared query 'b56863cb-5922-b702-b4e3-fdb463e2f6ef' denied due to ACLs
2016/06/23 07:53:40 [DEBUG] consul: dropping prepared query "b56863cb-5922-b702-b4e3-fdb463e2f6ef" from result due to ACLs
2016/06/23 07:53:40 [DEBUG] consul: dropping prepared query "b56863cb-5922-b702-b4e3-fdb463e2f6ef" from result due to ACLs
2016/06/23 07:53:40 [WARN] consul.prepared_query: Explain on prepared query 'b56863cb-5922-b702-b4e3-fdb463e2f6ef' denied due to ACLs
2016/06/23 07:53:40 [INFO] consul: shutting down server
2016/06/23 07:53:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:40 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:41 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:53:41 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestPreparedQuery_ACLDeny_Catchall_Template (2.55s)
=== RUN   TestPreparedQuery_Get
2016/06/23 07:53:41 [INFO] raft: Node at 127.0.0.1:15532 [Follower] entering Follower state
2016/06/23 07:53:41 [INFO] serf: EventMemberJoin: Node 15531 127.0.0.1
2016/06/23 07:53:41 [INFO] consul: adding LAN server Node 15531 (Addr: 127.0.0.1:15532) (DC: dc1)
2016/06/23 07:53:41 [INFO] serf: EventMemberJoin: Node 15531.dc1 127.0.0.1
2016/06/23 07:53:41 [INFO] consul: adding WAN server Node 15531.dc1 (Addr: 127.0.0.1:15532) (DC: dc1)
2016/06/23 07:53:41 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:41 [INFO] raft: Node at 127.0.0.1:15532 [Candidate] entering Candidate state
2016/06/23 07:53:42 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:42 [DEBUG] raft: Vote granted from 127.0.0.1:15532. Tally: 1
2016/06/23 07:53:42 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:42 [INFO] raft: Node at 127.0.0.1:15532 [Leader] entering Leader state
2016/06/23 07:53:42 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:42 [INFO] consul: New leader elected: Node 15531
2016/06/23 07:53:42 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:42 [DEBUG] raft: Node 127.0.0.1:15532 updated peer set (2): [127.0.0.1:15532]
2016/06/23 07:53:42 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:42 [INFO] consul: member 'Node 15531' joined, marking health alive
2016/06/23 07:53:44 [DEBUG] consul: dropping prepared query "8df0da12-202e-a096-ed30-555910cf47fe" from result due to ACLs
2016/06/23 07:53:44 [WARN] consul.prepared_query: Request to get prepared query '8df0da12-202e-a096-ed30-555910cf47fe' denied due to ACLs
2016/06/23 07:53:45 [INFO] consul: shutting down server
2016/06/23 07:53:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:45 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:46 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:53:46 [ERR] consul: failed to wait for barrier: raft is already shutdown
--- PASS: TestPreparedQuery_Get (5.00s)
=== RUN   TestPreparedQuery_List
2016/06/23 07:53:46 [INFO] raft: Node at 127.0.0.1:15536 [Follower] entering Follower state
2016/06/23 07:53:46 [INFO] serf: EventMemberJoin: Node 15535 127.0.0.1
2016/06/23 07:53:46 [INFO] consul: adding LAN server Node 15535 (Addr: 127.0.0.1:15536) (DC: dc1)
2016/06/23 07:53:46 [INFO] serf: EventMemberJoin: Node 15535.dc1 127.0.0.1
2016/06/23 07:53:46 [INFO] consul: adding WAN server Node 15535.dc1 (Addr: 127.0.0.1:15536) (DC: dc1)
2016/06/23 07:53:46 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:46 [INFO] raft: Node at 127.0.0.1:15536 [Candidate] entering Candidate state
2016/06/23 07:53:47 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:47 [DEBUG] raft: Vote granted from 127.0.0.1:15536. Tally: 1
2016/06/23 07:53:47 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:47 [INFO] raft: Node at 127.0.0.1:15536 [Leader] entering Leader state
2016/06/23 07:53:47 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:47 [INFO] consul: New leader elected: Node 15535
2016/06/23 07:53:47 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:47 [DEBUG] raft: Node 127.0.0.1:15536 updated peer set (2): [127.0.0.1:15536]
2016/06/23 07:53:47 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:47 [INFO] consul: member 'Node 15535' joined, marking health alive
2016/06/23 07:53:48 [DEBUG] consul: dropping prepared query "4692e83f-3396-833f-e13d-e2f2f21caf1d" from result due to ACLs
2016/06/23 07:53:48 [DEBUG] consul: dropping prepared query "4692e83f-3396-833f-e13d-e2f2f21caf1d" from result due to ACLs
2016/06/23 07:53:48 [INFO] consul: shutting down server
2016/06/23 07:53:48 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:49 [WARN] serf: Shutdown without a Leave
--- PASS: TestPreparedQuery_List (3.38s)
=== RUN   TestPreparedQuery_Explain
2016/06/23 07:53:50 [INFO] raft: Node at 127.0.0.1:15540 [Follower] entering Follower state
2016/06/23 07:53:50 [INFO] serf: EventMemberJoin: Node 15539 127.0.0.1
2016/06/23 07:53:50 [INFO] consul: adding LAN server Node 15539 (Addr: 127.0.0.1:15540) (DC: dc1)
2016/06/23 07:53:50 [INFO] serf: EventMemberJoin: Node 15539.dc1 127.0.0.1
2016/06/23 07:53:50 [INFO] consul: adding WAN server Node 15539.dc1 (Addr: 127.0.0.1:15540) (DC: dc1)
2016/06/23 07:53:50 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:50 [INFO] raft: Node at 127.0.0.1:15540 [Candidate] entering Candidate state
2016/06/23 07:53:50 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:50 [DEBUG] raft: Vote granted from 127.0.0.1:15540. Tally: 1
2016/06/23 07:53:50 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:50 [INFO] raft: Node at 127.0.0.1:15540 [Leader] entering Leader state
2016/06/23 07:53:50 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:50 [INFO] consul: New leader elected: Node 15539
2016/06/23 07:53:50 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:50 [DEBUG] raft: Node 127.0.0.1:15540 updated peer set (2): [127.0.0.1:15540]
2016/06/23 07:53:51 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:51 [INFO] consul: member 'Node 15539' joined, marking health alive
2016/06/23 07:53:52 [DEBUG] consul: dropping prepared query "8d1e6f04-108c-7444-e9d3-a64d47c69f90" from result due to ACLs
2016/06/23 07:53:52 [WARN] consul.prepared_query: Explain on prepared query '8d1e6f04-108c-7444-e9d3-a64d47c69f90' denied due to ACLs
2016/06/23 07:53:52 [INFO] consul: shutting down server
2016/06/23 07:53:52 [WARN] serf: Shutdown without a Leave
2016/06/23 07:53:52 [WARN] serf: Shutdown without a Leave
--- PASS: TestPreparedQuery_Explain (2.82s)
=== RUN   TestPreparedQuery_Execute
2016/06/23 07:53:52 [INFO] raft: Node at 127.0.0.1:15544 [Follower] entering Follower state
2016/06/23 07:53:52 [INFO] serf: EventMemberJoin: Node 15543 127.0.0.1
2016/06/23 07:53:52 [INFO] consul: adding LAN server Node 15543 (Addr: 127.0.0.1:15544) (DC: dc1)
2016/06/23 07:53:52 [INFO] serf: EventMemberJoin: Node 15543.dc1 127.0.0.1
2016/06/23 07:53:52 [INFO] consul: adding WAN server Node 15543.dc1 (Addr: 127.0.0.1:15544) (DC: dc1)
2016/06/23 07:53:52 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:52 [INFO] raft: Node at 127.0.0.1:15544 [Candidate] entering Candidate state
2016/06/23 07:53:53 [INFO] raft: Node at 127.0.0.1:15548 [Follower] entering Follower state
2016/06/23 07:53:53 [INFO] serf: EventMemberJoin: Node 15547 127.0.0.1
2016/06/23 07:53:53 [INFO] consul: adding LAN server Node 15547 (Addr: 127.0.0.1:15548) (DC: dc2)
2016/06/23 07:53:53 [INFO] serf: EventMemberJoin: Node 15547.dc2 127.0.0.1
2016/06/23 07:53:53 [INFO] consul: adding WAN server Node 15547.dc2 (Addr: 127.0.0.1:15548) (DC: dc2)
2016/06/23 07:53:53 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:53 [DEBUG] raft: Vote granted from 127.0.0.1:15544. Tally: 1
2016/06/23 07:53:53 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:53 [INFO] raft: Node at 127.0.0.1:15544 [Leader] entering Leader state
2016/06/23 07:53:53 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:53 [INFO] consul: New leader elected: Node 15543
2016/06/23 07:53:53 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:53:53 [INFO] raft: Node at 127.0.0.1:15548 [Candidate] entering Candidate state
2016/06/23 07:53:53 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:53 [DEBUG] raft: Node 127.0.0.1:15544 updated peer set (2): [127.0.0.1:15544]
2016/06/23 07:53:53 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:54 [DEBUG] raft: Votes needed: 1
2016/06/23 07:53:54 [DEBUG] raft: Vote granted from 127.0.0.1:15548. Tally: 1
2016/06/23 07:53:54 [INFO] raft: Election won. Tally: 1
2016/06/23 07:53:54 [INFO] raft: Node at 127.0.0.1:15548 [Leader] entering Leader state
2016/06/23 07:53:54 [INFO] consul: cluster leadership acquired
2016/06/23 07:53:54 [INFO] consul: New leader elected: Node 15547
2016/06/23 07:53:54 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:53:54 [DEBUG] raft: Node 127.0.0.1:15548 updated peer set (2): [127.0.0.1:15548]
2016/06/23 07:53:54 [INFO] consul: member 'Node 15543' joined, marking health alive
2016/06/23 07:53:54 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:53:54 [INFO] consul: member 'Node 15547' joined, marking health alive
2016/06/23 07:53:54 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15546
2016/06/23 07:53:54 [DEBUG] memberlist: TCP connection from=127.0.0.1:37072
2016/06/23 07:53:54 [INFO] serf: EventMemberJoin: Node 15543.dc1 127.0.0.1
2016/06/23 07:53:54 [INFO] consul: adding WAN server Node 15543.dc1 (Addr: 127.0.0.1:15544) (DC: dc1)
2016/06/23 07:53:54 [INFO] serf: EventMemberJoin: Node 15547.dc2 127.0.0.1
2016/06/23 07:53:54 [INFO] consul: adding WAN server Node 15547.dc2 (Addr: 127.0.0.1:15548) (DC: dc2)
2016/06/23 07:53:54 [DEBUG] serf: messageJoinType: Node 15547.dc2
2016/06/23 07:53:54 [DEBUG] serf: messageJoinType: Node 15547.dc2
2016/06/23 07:53:54 [DEBUG] serf: messageJoinType: Node 15547.dc2
2016/06/23 07:53:54 [DEBUG] serf: messageJoinType: Node 15547.dc2
2016/06/23 07:53:54 [DEBUG] serf: messageJoinType: Node 15547.dc2
2016/06/23 07:53:54 [DEBUG] serf: messageJoinType: Node 15547.dc2
2016/06/23 07:53:54 [DEBUG] serf: messageJoinType: Node 15547.dc2
2016/06/23 07:53:54 [DEBUG] serf: messageJoinType: Node 15547.dc2
2016/06/23 07:53:56 [DEBUG] memberlist: Potential blocking operation. Last command took 16.539843ms
2016/06/23 07:53:59 [DEBUG] memberlist: Potential blocking operation. Last command took 24.178413ms
2016/06/23 07:54:01 [DEBUG] memberlist: Potential blocking operation. Last command took 10.649996ms
2016/06/23 07:54:02 [DEBUG] memberlist: Potential blocking operation. Last command took 12.871398ms
2016/06/23 07:54:04 [DEBUG] memberlist: Potential blocking operation. Last command took 10.631328ms
2016/06/23 07:54:05 [DEBUG] memberlist: Potential blocking operation. Last command took 14.545115ms
2016/06/23 07:54:05 [INFO] consul: shutting down server
2016/06/23 07:54:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:05 [DEBUG] memberlist: Failed UDP ping: Node 15547.dc2 (timeout reached)
2016/06/23 07:54:05 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:54:05 [ERR] consul: failed to wait for barrier: raft is already shutdown
2016/06/23 07:54:05 [INFO] consul: shutting down server
2016/06/23 07:54:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:05 [INFO] memberlist: Suspect Node 15547.dc2 has failed, no acks received
2016/06/23 07:54:05 [DEBUG] memberlist: Failed UDP ping: Node 15547.dc2 (timeout reached)
2016/06/23 07:54:05 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:05 [INFO] memberlist: Suspect Node 15547.dc2 has failed, no acks received
2016/06/23 07:54:06 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:54:06 [INFO] memberlist: Marking Node 15547.dc2 as failed, suspect timeout reached
2016/06/23 07:54:06 [INFO] serf: EventMemberFailed: Node 15547.dc2 127.0.0.1
--- FAIL: TestPreparedQuery_Execute (13.75s)
	prepared_query_endpoint_test.go:1570: bad: {foo [{0x10fe6a80 0x11003fc0 []} {0x10fe6ab0 0x11003a80 []} {0x10fe6ae0 0x11003b00 []} {0x10fe6b10 0x11003b40 []} {0x10fe6b40 0x11003b80 []} {0x10fe6b70 0x11003bc0 []} {0x10fe6ba0 0x11003c00 []} {0x10fe6bd0 0x11003c40 []} {0x10fe6c90 0x11003c80 []} {0x10fe6cc0 0x11003cc0 []}] {10s} dc1 0 {0 0 true}}
=== RUN   TestPreparedQuery_Execute_ForwardLeader
2016/06/23 07:54:06 [INFO] raft: Node at 127.0.0.1:15552 [Follower] entering Follower state
2016/06/23 07:54:06 [INFO] serf: EventMemberJoin: Node 15551 127.0.0.1
2016/06/23 07:54:06 [INFO] consul: adding LAN server Node 15551 (Addr: 127.0.0.1:15552) (DC: dc1)
2016/06/23 07:54:06 [INFO] serf: EventMemberJoin: Node 15551.dc1 127.0.0.1
2016/06/23 07:54:06 [INFO] consul: adding WAN server Node 15551.dc1 (Addr: 127.0.0.1:15552) (DC: dc1)
2016/06/23 07:54:06 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:54:06 [INFO] raft: Node at 127.0.0.1:15552 [Candidate] entering Candidate state
2016/06/23 07:54:06 [INFO] raft: Node at 127.0.0.1:15556 [Follower] entering Follower state
2016/06/23 07:54:06 [INFO] serf: EventMemberJoin: Node 15555 127.0.0.1
2016/06/23 07:54:06 [INFO] consul: adding LAN server Node 15555 (Addr: 127.0.0.1:15556) (DC: dc1)
2016/06/23 07:54:06 [INFO] serf: EventMemberJoin: Node 15555.dc1 127.0.0.1
2016/06/23 07:54:06 [INFO] consul: adding WAN server Node 15555.dc1 (Addr: 127.0.0.1:15556) (DC: dc1)
2016/06/23 07:54:06 [DEBUG] memberlist: TCP connection from=127.0.0.1:50020
2016/06/23 07:54:06 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15553
2016/06/23 07:54:06 [INFO] serf: EventMemberJoin: Node 15555 127.0.0.1
2016/06/23 07:54:06 [INFO] consul: adding LAN server Node 15555 (Addr: 127.0.0.1:15556) (DC: dc1)
2016/06/23 07:54:06 [INFO] serf: EventMemberJoin: Node 15551 127.0.0.1
2016/06/23 07:54:06 [INFO] consul: adding LAN server Node 15551 (Addr: 127.0.0.1:15552) (DC: dc1)
2016/06/23 07:54:06 [DEBUG] raft: Votes needed: 1
2016/06/23 07:54:06 [DEBUG] raft: Vote granted from 127.0.0.1:15552. Tally: 1
2016/06/23 07:54:06 [INFO] raft: Election won. Tally: 1
2016/06/23 07:54:06 [INFO] raft: Node at 127.0.0.1:15552 [Leader] entering Leader state
2016/06/23 07:54:06 [INFO] consul: cluster leadership acquired
2016/06/23 07:54:06 [INFO] consul: New leader elected: Node 15551
2016/06/23 07:54:06 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:54:06 [INFO] raft: Node at 127.0.0.1:15556 [Candidate] entering Candidate state
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [INFO] consul: New leader elected: Node 15551
2016/06/23 07:54:07 [DEBUG] serf: messageJoinType: Node 15555
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [DEBUG] serf: messageJoinType: Node 15555
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [DEBUG] serf: messageJoinType: Node 15555
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [DEBUG] serf: messageJoinType: Node 15555
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [DEBUG] serf: messageJoinType: Node 15555
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [DEBUG] serf: messageJoinType: Node 15555
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:54:07 [DEBUG] serf: messageJoinType: Node 15555
2016/06/23 07:54:07 [DEBUG] serf: messageJoinType: Node 15555
2016/06/23 07:54:07 [DEBUG] raft: Node 127.0.0.1:15552 updated peer set (2): [127.0.0.1:15552]
2016/06/23 07:54:07 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:54:07 [INFO] consul: member 'Node 15551' joined, marking health alive
2016/06/23 07:54:07 [ERR] consul: 'Node 15555' and 'Node 15551' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:07 [INFO] consul: member 'Node 15555' joined, marking health alive
2016/06/23 07:54:07 [DEBUG] raft: Votes needed: 1
2016/06/23 07:54:07 [DEBUG] raft: Vote granted from 127.0.0.1:15556. Tally: 1
2016/06/23 07:54:07 [INFO] raft: Election won. Tally: 1
2016/06/23 07:54:07 [INFO] raft: Node at 127.0.0.1:15556 [Leader] entering Leader state
2016/06/23 07:54:07 [INFO] consul: cluster leadership acquired
2016/06/23 07:54:07 [INFO] consul: New leader elected: Node 15555
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [INFO] consul: New leader elected: Node 15555
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:07 [DEBUG] serf: messageUserEventType: consul:new-leader
2016/06/23 07:54:08 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/06/23 07:54:08 [DEBUG] raft: Node 127.0.0.1:15556 updated peer set (2): [127.0.0.1:15556]
2016/06/23 07:54:08 [ERR] consul: 'Node 15555' and 'Node 15551' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:08 [DEBUG] consul: reset tombstone GC to index 2
2016/06/23 07:54:08 [INFO] consul: member 'Node 15555' joined, marking health alive
2016/06/23 07:54:08 [DEBUG] memberlist: Potential blocking operation. Last command took 15.720485ms
2016/06/23 07:54:08 [ERR] consul: 'Node 15555' and 'Node 15551' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:08 [ERR] consul: 'Node 15551' and 'Node 15555' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:08 [INFO] consul: member 'Node 15551' joined, marking health alive
2016/06/23 07:54:08 [ERR] consul: 'Node 15555' and 'Node 15551' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:08 [ERR] consul: 'Node 15555' and 'Node 15551' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:09 [ERR] consul: 'Node 15555' and 'Node 15551' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:09 [ERR] consul: 'Node 15555' and 'Node 15551' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:09 [ERR] consul: 'Node 15551' and 'Node 15555' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:09 [ERR] consul: 'Node 15555' and 'Node 15551' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:09 [ERR] consul: 'Node 15551' and 'Node 15555' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:09 [ERR] consul: 'Node 15551' and 'Node 15555' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:09 [ERR] consul: 'Node 15555' and 'Node 15551' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:09 [INFO] consul: shutting down server
2016/06/23 07:54:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:09 [DEBUG] memberlist: Failed UDP ping: Node 15555 (timeout reached)
2016/06/23 07:54:09 [INFO] memberlist: Suspect Node 15555 has failed, no acks received
2016/06/23 07:54:09 [ERR] consul: failed to wait for barrier: leadership lost while committing log
2016/06/23 07:54:09 [ERR] consul: 'Node 15555' and 'Node 15551' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/06/23 07:54:09 [INFO] consul: shutting down server
2016/06/23 07:54:09 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:10 [INFO] memberlist: Marking Node 15555 as failed, suspect timeout reached
2016/06/23 07:54:10 [INFO] serf: EventMemberFailed: Node 15555 127.0.0.1
2016/06/23 07:54:10 [WARN] serf: Shutdown without a Leave
2016/06/23 07:54:10 [INFO] consul: member 'Node 15555' failed, marking health critical
2016/06/23 07:54:10 [ERR] consul.catalog: Register failed: leadership lost while committing log
2016/06/23 07:54:10 [ERR] consul: failed to reconcile member: {Node 15555 127.0.0.1 15557 map[build: port:15556 bootstrap:1 role:consul dc:dc1 vsn:2 vsn_min:1 vsn_max:3] failed 1 3 2 2 4 4}: leadership lost while committing log
2016/06/23 07:54:10 [ERR] consul: failed to reconcile: leadership lost while committing log
--- PASS: TestPreparedQuery_Execute_ForwardLeader (4.58s)
=== RUN   TestPreparedQuery_tagFilter
--- PASS: TestPreparedQuery_tagFilter (0.00s)
=== RUN   TestPreparedQuery_Wrapper
2016/06/23 07:54:11 [INFO] raft: Node at 127.0.0.1:15560 [Follower] entering Follower state
2016/06/23 07:54:11 [INFO] serf: EventMemberJoin: Node 15559 127.0.0.1
2016/06/23 07:54:11 [INFO] consul: adding LAN server Node 15559 (Addr: 127.0.0.1:15560) (DC: dc1)
2016/06/23 07:54:11 [INFO] serf: EventMemberJoin: Node 15559.dc1 127.0.0.1
2016/06/23 07:54:11 [INFO] consul: adding WAN server Node 15559.dc1 (Addr: 127.0.0.1:15560) (DC: dc1)
2016/06/23 07:54:11 [WARN] raft: Heartbeat timeout reached, starting election
2016/06/23 07:54:11 [INFO] raft: Node at 127.0.0.1:15560 [Candidate] entering Candidate state
SIGQUIT: quit
PC=0x7332c m=0

goroutine 0 [idle]:
runtime.futex(0xa1742c, 0x0, 0x0, 0x0, 0x0, 0x0, 0x1f6f0, 0x0, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/runtime/sys_linux_arm.s:246 +0x1c
runtime.futexsleep(0xa1742c, 0x0, 0xffffffff, 0xffffffff)
	/usr/lib/go-1.6/src/runtime/os1_linux.go:40 +0x68
runtime.notesleep(0xa1742c)
	/usr/lib/go-1.6/src/runtime/lock_futex.go:145 +0xa4
runtime.stopm()
	/usr/lib/go-1.6/src/runtime/proc.go:1538 +0x100
runtime.findrunnable(0x10c1aa00, 0x0)
	/usr/lib/go-1.6/src/runtime/proc.go:1976 +0x7c8
runtime.schedule()
	/usr/lib/go-1.6/src/runtime/proc.go:2075 +0x26c
runtime.park_m(0x10d85e10)
	/usr/lib/go-1.6/src/runtime/proc.go:2140 +0x16c
runtime.mcall(0xa16e00)
	/usr/lib/go-1.6/src/runtime/asm_arm.s:183 +0x5c

goroutine 1 [chan receive]:
testing.RunTests(0x828a2c, 0xa15198, 0xbf, 0xbf, 0x0)
	/usr/lib/go-1.6/src/testing/testing.go:583 +0x62c
testing.(*M).Run(0x10eb9f7c, 0x4)
	/usr/lib/go-1.6/src/testing/testing.go:515 +0x8c
main.main()
	github.com/hashicorp/consul/consul/_test/_testmain.go:436 +0x118

goroutine 17 [syscall, 9 minutes, locked to thread]:
runtime.goexit()
	/usr/lib/go-1.6/src/runtime/asm_arm.s:990 +0x4

goroutine 20 [syscall, 9 minutes]:
os/signal.signal_recv(0x0)
	/usr/lib/go-1.6/src/runtime/sigqueue.go:116 +0x190
os/signal.loop()
	/usr/lib/go-1.6/src/os/signal/signal_unix.go:22 +0x14
created by os/signal.init.1
	/usr/lib/go-1.6/src/os/signal/signal_unix.go:28 +0x30

goroutine 7657 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x10d0ca20, 0x11051440)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:133 +0x1ec
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 7668 [IO wait]:
net.runtime_pollWait(0xb646bec0, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x11051a38, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x11051a38, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x11051a00, 0x1107a000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb542b018, 0x10c70000)
	/usr/lib/go-1.6/src/net/fd_unix.go:277 +0x20c
net.(*UDPConn).ReadFromUDP(0x10dd27c0, 0x1107a000, 0x10000, 0x10000, 0x5e9d98, 0x10000, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x10dd27c0, 0x1107a000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x10d0cf30)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:140 +0xd18

goroutine 7682 [select]:
github.com/hashicorp/consul/consul.(*ConnPool).reap(0x10fe67b0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:412 +0x3b4
created by github.com/hashicorp/consul/consul.NewPool
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:175 +0x1bc

goroutine 7680 [select]:
github.com/hashicorp/consul/consul.(*Server).sessionStats(0x10c76e00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/session_ttl.go:152 +0x1c4
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:276 +0xec4

goroutine 7679 [IO wait]:
net.runtime_pollWait(0xb646b650, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x11003378, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x11003378, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x11003340, 0x0, 0xb5433a10, 0x10c0ad80)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10c6c898, 0xb9a04, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
net.(*TCPListener).Accept(0x10c6c898, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:264 +0x34
github.com/hashicorp/consul/consul.(*Server).listen(0x10c76e00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:60 +0x48
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:273 +0xea8

goroutine 7670 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10d0cf30, 0x5f5e100, 0x0, 0x11051c00, 0x11051b00, 0x10dd29c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 7664 [select]:
github.com/hashicorp/consul/consul.(*Server).lanEventHandler(0x10c76e00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:37 +0x47c
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:261 +0xd28

goroutine 7678 [select]:
github.com/hashicorp/consul/consul.(*Server).wanEventHandler(0x10c76e00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/serf.go:67 +0x2c8
created by github.com/hashicorp/consul/consul.NewServer
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:270 +0xe8c

goroutine 7661 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10d1c500, 0x701e38, 0x6, 0x11000860)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 7672 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10d0cf30, 0x5f5e100, 0x0, 0x11051cc0, 0x11051b00, 0x10dd29d8)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 7638 [select]:
github.com/hashicorp/consul/consul.(*Coordinate).batchUpdate(0x10e1ebd0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:41 +0x1cc
created by github.com/hashicorp/consul/consul.NewCoordinate
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:33 +0xc4

goroutine 7675 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10d1c640, 0x701e38, 0x6, 0x11000ea0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:397 +0x1d8c

goroutine 7684 [select]:
github.com/hashicorp/consul/consul.(*RaftLayer).Accept(0x10e40b80, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/raft_rpc.go:57 +0x138
github.com/hashicorp/raft.(*NetworkTransport).listen(0x10f27680)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:362 +0x50
created by github.com/hashicorp/raft.NewNetworkTransportWithLogger
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:154 +0x270

goroutine 7674 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x10d1c640)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 7669 [select]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x10d0cf30)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:141 +0xd34

goroutine 7654 [IO wait]:
net.runtime_pollWait(0xb646bbf0, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110510b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110510b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).readFrom(0x11051080, 0x1121c000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0xb542b018, 0x10c70000)
	/usr/lib/go-1.6/src/net/fd_unix.go:277 +0x20c
net.(*UDPConn).ReadFromUDP(0x10dd2ad8, 0x1121c000, 0x10000, 0x10000, 0x5e9d98, 0x10000, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:61 +0xe4
net.(*UDPConn).ReadFrom(0x10dd2ad8, 0x1121c000, 0x10000, 0x10000, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/udpsock_posix.go:79 +0xe4
github.com/hashicorp/memberlist.(*Memberlist).udpListen(0x10d0ca20)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:284 +0x2ac
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:140 +0xd18

goroutine 7653 [IO wait]:
net.runtime_pollWait(0xb646bce0, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x11051078, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x11051078, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x11051040, 0x0, 0xb5433a10, 0x11078280)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10dd2ad0, 0xc0001, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x10d0ca20)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:139 +0xcfc

goroutine 7637 [select]:
github.com/hashicorp/consul/consul.(*ConnPool).reap(0x10e7fcb0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:412 +0x3b4
created by github.com/hashicorp/consul/consul.NewPool
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/pool.go:175 +0x1bc

goroutine 7617 [select]:
github.com/hashicorp/raft.(*Raft).runSnapshots(0x10fd2120)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:1706 +0x380
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runSnapshots)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:254 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x10fd2120, 0x10dd2a70)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 7663 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10d1c500, 0x702998, 0x5, 0x11000900)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 7659 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x10d1c500)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 7651 [select]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x110007c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

goroutine 7636 [syscall]:
syscall.Syscall(0x94, 0xf, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/syscall/asm_linux_arm.s:17 +0x8
syscall.Fdatasync(0xf, 0x0, 0x0)
	/usr/lib/go-1.6/src/syscall/zsyscall_linux_arm.go:472 +0x44
github.com/boltdb/bolt.fdatasync(0x11056000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/bolt_linux.go:9 +0x40
github.com/boltdb/bolt.(*Tx).writeMeta(0x10cb4900, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/tx.go:554 +0x1d0
github.com/boltdb/bolt.(*Tx).Commit(0x10cb4900, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/tx.go:221 +0x664
github.com/hashicorp/raft-boltdb.(*BoltStore).initialize(0x11078470, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft-boltdb/bolt_store.go:75 +0x164
github.com/hashicorp/raft-boltdb.NewBoltStore(0x10fe6bd0, 0x21, 0x2, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft-boltdb/bolt_store.go:51 +0xdc
github.com/hashicorp/consul/consul.(*Server).setupRaft(0x10d2e000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:369 +0xacc
github.com/hashicorp/consul/consul.NewServer(0x10fbc000, 0x10f54884, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:249 +0xad8
github.com/hashicorp/consul/consul.testServerWithConfig(0x10fcc360, 0x828494, 0x0, 0x0, 0x10c76e00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server_test.go:112 +0x144
github.com/hashicorp/consul/consul.TestPreparedQuery_Wrapper(0x10fcc360)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/prepared_query_endpoint_test.go:2281 +0x11c
testing.tRunner(0x10fcc360, 0xa15828)
	/usr/lib/go-1.6/src/testing/testing.go:473 +0xa8
created by testing.RunTests
	/usr/lib/go-1.6/src/testing/testing.go:582 +0x600

goroutine 7683 [select]:
github.com/hashicorp/consul/consul.(*Coordinate).batchUpdate(0x10f54a30)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:41 +0x1cc
created by github.com/hashicorp/consul/consul.NewCoordinate
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/coordinate_endpoint.go:33 +0xc4

goroutine 7616 [select]:
github.com/hashicorp/raft.(*Raft).runFSM(0x10fd2120)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:509 +0xd5c
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.runFSM)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:253 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x10fd2120, 0x10dd29c8)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 7656 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10d0ca20, 0x5f5e100, 0x0, 0x11051480, 0x11051440, 0x10dd2748)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:81 +0x194

goroutine 5288 [chan receive, 3 minutes]:
github.com/hashicorp/raft.(*deferError).Error(0x10db4cc0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/future.go:56 +0xc4
github.com/hashicorp/consul/consul.(*Server).leaderLoop(0x10c769a0, 0x111141c0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:75 +0x224
created by github.com/hashicorp/consul/consul.(*Server).monitorLeadership
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:37 +0xe4

goroutine 7662 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10d1c500, 0x701858, 0x5, 0x110008e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 7615 [syscall]:
syscall.Syscall(0x94, 0x5, 0x0, 0x0, 0x0, 0xfffefff, 0x3000)
	/usr/lib/go-1.6/src/syscall/asm_linux_arm.s:17 +0x8
syscall.Fdatasync(0x5, 0x0, 0x0)
	/usr/lib/go-1.6/src/syscall/zsyscall_linux_arm.go:472 +0x44
github.com/boltdb/bolt.fdatasync(0x10fd2000, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/bolt_linux.go:9 +0x40
github.com/boltdb/bolt.(*Tx).write(0x10f78400, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/tx.go:517 +0x590
github.com/boltdb/bolt.(*Tx).Commit(0x10f78400, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/boltdb/bolt/tx.go:198 +0x4ec
github.com/hashicorp/raft-boltdb.(*BoltStore).Set(0x10e1fc30, 0x9fc4f0, 0xc, 0xc, 0x110b3f28, 0x8, 0x8, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft-boltdb/bolt_store.go:199 +0x140
github.com/hashicorp/raft-boltdb.(*BoltStore).SetUint64(0x10e1fc30, 0x9fc4f0, 0xc, 0xc, 0x1, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft-boltdb/bolt_store.go:221 +0x278
github.com/hashicorp/raft.(*Raft).persistVote(0x10fd2120, 0x1, 0x0, 0x1105e000, 0xf, 0x10, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:1675 +0x78
github.com/hashicorp/raft.(*Raft).electSelf(0x10fd2120, 0x3)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:1657 +0x2f0
github.com/hashicorp/raft.(*Raft).runCandidate(0x10fd2120)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:668 +0x154
github.com/hashicorp/raft.(*Raft).run(0x10fd2120)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:600 +0xa0
github.com/hashicorp/raft.(*Raft).(github.com/hashicorp/raft.run)-fm()
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/raft.go:252 +0x1c
github.com/hashicorp/raft.(*raftState).goFunc.func1(0x10fd2120, 0x10dd29b8)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:152 +0x4c
created by github.com/hashicorp/raft.(*raftState).goFunc
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/state.go:153 +0x40

goroutine 7676 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10d1c640, 0x701858, 0x5, 0x11000ec0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:398 +0x1dc0

goroutine 7558 [IO wait]:
net.runtime_pollWait(0xb646c0a0, 0x72, 0x10fc6000)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x10d8c878, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x10d8c878, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).Read(0x10d8c840, 0x10fc6000, 0x1000, 0x1000, 0x0, 0xb542b018, 0x10c70000)
	/usr/lib/go-1.6/src/net/fd_unix.go:250 +0x1c4
net.(*conn).Read(0x10d48310, 0x10fc6000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/net.go:172 +0xc8
bufio.(*Reader).fill(0x10f3cae0)
	/usr/lib/go-1.6/src/bufio/bufio.go:97 +0x1c4
bufio.(*Reader).ReadByte(0x10f3cae0, 0x10, 0x0, 0x0)
	/usr/lib/go-1.6/src/bufio/bufio.go:229 +0x8c
github.com/hashicorp/go-msgpack/codec.(*ioDecReader).readn1(0x10d5ab00, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/go-msgpack/codec/decode.go:90 +0x48
github.com/hashicorp/go-msgpack/codec.(*msgpackDecDriver).initReadNext(0x10c0ade0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/go-msgpack/codec/msgpack.go:540 +0x44
github.com/hashicorp/go-msgpack/codec.(*Decoder).decode(0x10f3cb10, 0x5d8800, 0x10d5ab40)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/go-msgpack/codec/decode.go:635 +0x54
github.com/hashicorp/go-msgpack/codec.(*Decoder).Decode(0x10f3cb10, 0x5d8800, 0x10d5ab40, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/go-msgpack/codec/decode.go:630 +0x74
github.com/hashicorp/net-rpc-msgpackrpc.(*MsgpackCodec).read(0x10f3cab0, 0x5d8800, 0x10d5ab40, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/net-rpc-msgpackrpc/codec.go:121 +0xbc
github.com/hashicorp/net-rpc-msgpackrpc.(*MsgpackCodec).ReadRequestHeader(0x10f3cab0, 0x10d5ab40, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/net-rpc-msgpackrpc/codec.go:60 +0x40
net/rpc.(*Server).readRequestHeader(0x11003280, 0xb40c00b8, 0x10f3cab0, 0x0, 0x0, 0x10d5ab40, 0x10d48300, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/rpc/server.go:576 +0x80
net/rpc.(*Server).readRequest(0x11003280, 0xb40c00b8, 0x10f3cab0, 0x15e10, 0x110032d0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/usr/lib/go-1.6/src/net/rpc/server.go:543 +0x84
net/rpc.(*Server).ServeRequest(0x11003280, 0xb40c00b8, 0x10f3cab0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/rpc/server.go:486 +0x4c
github.com/hashicorp/consul/consul.(*Server).handleConsulConn(0x10c76e00, 0xb4a100f8, 0x10d48310)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:178 +0xfc
github.com/hashicorp/consul/consul.(*Server).handleConn(0x10c76e00, 0xb4a100f8, 0x10d48310, 0x2a6200)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:103 +0x3e4
created by github.com/hashicorp/consul/consul.(*Server).listen
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/rpc.go:69 +0x164

goroutine 7639 [select]:
github.com/hashicorp/consul/consul.(*RaftLayer).Accept(0x110c50c0, 0x0, 0x0, 0x0, 0x0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/raft_rpc.go:57 +0x138
github.com/hashicorp/raft.(*NetworkTransport).listen(0x10f3a6e0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:362 +0x50
created by github.com/hashicorp/raft.NewNetworkTransportWithLogger
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/raft/net_transport.go:154 +0x270

goroutine 7673 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReap(0x10d1c640)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1388 +0x26c
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:395 +0x1d3c

goroutine 7652 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x10f78980)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 7666 [select]:
github.com/hashicorp/serf/serf.(*Snapshotter).stream(0x10f78380)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:187 +0x998
created by github.com/hashicorp/serf/serf.NewSnapshotter
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/snapshot.go:129 +0x624

goroutine 7660 [select]:
github.com/hashicorp/serf/serf.(*Serf).handleReconnect(0x10d1c500)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1404 +0xe0
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:396 +0x1d58

goroutine 7655 [select]:
github.com/hashicorp/memberlist.(*Memberlist).udpHandler(0x10d0ca20)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:370 +0x360
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:141 +0xd34

goroutine 7650 [select]:
github.com/hashicorp/consul/consul.(*Server).monitorLeadership(0x10c76e00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/leader.go:33 +0x1a0
created by github.com/hashicorp/consul/consul.(*Server).setupRaft
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/consul/consul/server.go:426 +0x8a4

goroutine 7671 [select]:
github.com/hashicorp/memberlist.(*Memberlist).pushPullTrigger(0x10d0cf30, 0x11051b00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:133 +0x1ec
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:87 +0x288

goroutine 7658 [select]:
github.com/hashicorp/memberlist.(*Memberlist).triggerFunc(0x10d0ca20, 0x5f5e100, 0x0, 0x110514c0, 0x11051440, 0x10dd2758)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:115 +0x150
created by github.com/hashicorp/memberlist.(*Memberlist).schedule
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/state.go:93 +0x360

goroutine 7677 [select]:
github.com/hashicorp/serf/serf.(*Serf).checkQueueDepth(0x10d1c640, 0x702998, 0x5, 0x11000ee0)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:1504 +0x520
created by github.com/hashicorp/serf/serf.Create
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/serf.go:399 +0x1df4

goroutine 7667 [IO wait]:
net.runtime_pollWait(0xb646c028, 0x72, 0x0)
	/usr/lib/go-1.6/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0x110519b8, 0x72, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:73 +0x34
net.(*pollDesc).WaitRead(0x110519b8, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/fd_poll_runtime.go:78 +0x30
net.(*netFD).accept(0x11051980, 0x0, 0xb5433a10, 0x11078410)
	/usr/lib/go-1.6/src/net/fd_unix.go:426 +0x21c
net.(*TCPListener).AcceptTCP(0x10dd27b8, 0x0, 0x0, 0x0)
	/usr/lib/go-1.6/src/net/tcpsock_posix.go:254 +0x4c
github.com/hashicorp/memberlist.(*Memberlist).tcpListen(0x10d0cf30)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/net.go:188 +0x2c
created by github.com/hashicorp/memberlist.newMemberlist
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/memberlist/memberlist.go:139 +0xcfc

goroutine 7665 [select]:
github.com/hashicorp/serf/serf.(*serfQueries).stream(0x11000e00)
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:80 +0x248
created by github.com/hashicorp/serf/serf.newSerfQueries
	/<<PKGBUILDDIR>>/obj-arm-linux-gnueabihf/src/github.com/hashicorp/serf/serf/internal_query.go:73 +0x110

trap    0x0
error   0x0
oldmask 0x0
r0      0xa1742c
r1      0x0
r2      0x0
r3      0x0
r4      0x0
r5      0x0
r6      0x16a802
r7      0xf0
r8      0x142da2
r9      0x0
r10     0xa16e70
fp      0xa16d24
ip      0x10c1b6f4
sp      0xbed793dc
lr      0x3d878
pc      0x7332c
cpsr    0xa0000010
fault   0x0
*** Test killed with quit: ran too long (10m0s).
FAIL	github.com/hashicorp/consul/consul	600.137s
=== RUN   TestTemplate_Compile
--- PASS: TestTemplate_Compile (0.01s)
=== RUN   TestTemplate_Render
--- PASS: TestTemplate_Render (0.01s)
=== RUN   TestWalk_ServiceQuery
--- PASS: TestWalk_ServiceQuery (0.00s)
=== RUN   TestWalk_Visitor_Errors
--- PASS: TestWalk_Visitor_Errors (0.00s)
PASS
ok  	github.com/hashicorp/consul/consul/prepared_query	0.114s
=== RUN   TestDelay
--- PASS: TestDelay (0.50s)
=== RUN   TestGraveyard_Lifecycle
--- PASS: TestGraveyard_Lifecycle (0.00s)
=== RUN   TestGraveyard_GC_Trigger
--- PASS: TestGraveyard_GC_Trigger (0.10s)
=== RUN   TestGraveyard_Snapshot_Restore
--- PASS: TestGraveyard_Snapshot_Restore (0.00s)
=== RUN   TestNotifyGroup
--- PASS: TestNotifyGroup (0.00s)
=== RUN   TestNotifyGroup_Clear
--- PASS: TestNotifyGroup_Clear (0.00s)
=== RUN   TestPreparedQueryIndex_FromObject
--- PASS: TestPreparedQueryIndex_FromObject (0.00s)
=== RUN   TestPreparedQueryIndex_FromArgs
--- PASS: TestPreparedQueryIndex_FromArgs (0.00s)
=== RUN   TestPreparedQueryIndex_PrefixFromArgs
--- PASS: TestPreparedQueryIndex_PrefixFromArgs (0.00s)
=== RUN   TestStateStore_PreparedQuery_isUUID
--- PASS: TestStateStore_PreparedQuery_isUUID (0.01s)
=== RUN   TestStateStore_PreparedQuerySet_PreparedQueryGet
--- PASS: TestStateStore_PreparedQuerySet_PreparedQueryGet (0.01s)
=== RUN   TestStateStore_PreparedQueryDelete
--- PASS: TestStateStore_PreparedQueryDelete (0.00s)
=== RUN   TestStateStore_PreparedQueryResolve
--- PASS: TestStateStore_PreparedQueryResolve (0.01s)
=== RUN   TestStateStore_PreparedQueryList
--- PASS: TestStateStore_PreparedQueryList (0.00s)
=== RUN   TestStateStore_PreparedQuery_Snapshot_Restore
--- PASS: TestStateStore_PreparedQuery_Snapshot_Restore (0.01s)
=== RUN   TestStateStore_PreparedQuery_Watches
--- PASS: TestStateStore_PreparedQuery_Watches (0.01s)
=== RUN   TestStateStore_Schema
--- PASS: TestStateStore_Schema (0.00s)
=== RUN   TestStateStore_Restore_Abort
--- PASS: TestStateStore_Restore_Abort (0.00s)
=== RUN   TestStateStore_maxIndex
--- PASS: TestStateStore_maxIndex (0.00s)
=== RUN   TestStateStore_indexUpdateMaxTxn
--- PASS: TestStateStore_indexUpdateMaxTxn (0.00s)
=== RUN   TestStateStore_GC
--- PASS: TestStateStore_GC (0.06s)
=== RUN   TestStateStore_ReapTombstones
--- PASS: TestStateStore_ReapTombstones (0.00s)
=== RUN   TestStateStore_GetWatches
--- PASS: TestStateStore_GetWatches (0.00s)
=== RUN   TestStateStore_EnsureRegistration
--- PASS: TestStateStore_EnsureRegistration (0.01s)
=== RUN   TestStateStore_EnsureRegistration_Restore
--- PASS: TestStateStore_EnsureRegistration_Restore (0.01s)
=== RUN   TestStateStore_EnsureRegistration_Watches
--- PASS: TestStateStore_EnsureRegistration_Watches (0.01s)
=== RUN   TestStateStore_EnsureNode
--- PASS: TestStateStore_EnsureNode (0.00s)
=== RUN   TestStateStore_GetNodes
--- PASS: TestStateStore_GetNodes (0.00s)
=== RUN   TestStateStore_DeleteNode
--- PASS: TestStateStore_DeleteNode (0.00s)
=== RUN   TestStateStore_Node_Snapshot
--- PASS: TestStateStore_Node_Snapshot (0.00s)
=== RUN   TestStateStore_Node_Watches
--- PASS: TestStateStore_Node_Watches (0.01s)
=== RUN   TestStateStore_EnsureService
--- PASS: TestStateStore_EnsureService (0.00s)
=== RUN   TestStateStore_Services
--- PASS: TestStateStore_Services (0.00s)
=== RUN   TestStateStore_ServiceNodes
--- PASS: TestStateStore_ServiceNodes (0.00s)
=== RUN   TestStateStore_ServiceTagNodes
--- PASS: TestStateStore_ServiceTagNodes (0.00s)
=== RUN   TestStateStore_ServiceTagNodes_MultipleTags
--- PASS: TestStateStore_ServiceTagNodes_MultipleTags (0.00s)
=== RUN   TestStateStore_DeleteService
--- PASS: TestStateStore_DeleteService (0.00s)
=== RUN   TestStateStore_Service_Snapshot
--- PASS: TestStateStore_Service_Snapshot (0.00s)
=== RUN   TestStateStore_Service_Watches
--- PASS: TestStateStore_Service_Watches (0.00s)
=== RUN   TestStateStore_EnsureCheck
--- PASS: TestStateStore_EnsureCheck (0.00s)
=== RUN   TestStateStore_EnsureCheck_defaultStatus
--- PASS: TestStateStore_EnsureCheck_defaultStatus (0.00s)
=== RUN   TestStateStore_NodeChecks
--- PASS: TestStateStore_NodeChecks (0.00s)
=== RUN   TestStateStore_ServiceChecks
--- PASS: TestStateStore_ServiceChecks (0.00s)
=== RUN   TestStateStore_ChecksInState
--- PASS: TestStateStore_ChecksInState (0.00s)
=== RUN   TestStateStore_DeleteCheck
--- PASS: TestStateStore_DeleteCheck (0.00s)
=== RUN   TestStateStore_CheckServiceNodes
--- PASS: TestStateStore_CheckServiceNodes (0.01s)
=== RUN   TestStateStore_CheckServiceTagNodes
--- PASS: TestStateStore_CheckServiceTagNodes (0.00s)
=== RUN   TestStateStore_Check_Snapshot
--- PASS: TestStateStore_Check_Snapshot (0.01s)
=== RUN   TestStateStore_Check_Watches
--- PASS: TestStateStore_Check_Watches (0.00s)
=== RUN   TestStateStore_NodeInfo_NodeDump
--- PASS: TestStateStore_NodeInfo_NodeDump (0.01s)
=== RUN   TestStateStore_KVSSet_KVSGet
--- PASS: TestStateStore_KVSSet_KVSGet (0.00s)
=== RUN   TestStateStore_KVSList
--- PASS: TestStateStore_KVSList (0.00s)
=== RUN   TestStateStore_KVSListKeys
--- PASS: TestStateStore_KVSListKeys (0.00s)
=== RUN   TestStateStore_KVSDelete
--- PASS: TestStateStore_KVSDelete (0.00s)
=== RUN   TestStateStore_KVSDeleteCAS
--- PASS: TestStateStore_KVSDeleteCAS (0.00s)
=== RUN   TestStateStore_KVSSetCAS
--- PASS: TestStateStore_KVSSetCAS (0.00s)
=== RUN   TestStateStore_KVSDeleteTree
--- PASS: TestStateStore_KVSDeleteTree (0.00s)
=== RUN   TestStateStore_KVSLockDelay
--- PASS: TestStateStore_KVSLockDelay (0.00s)
=== RUN   TestStateStore_KVSLock
--- PASS: TestStateStore_KVSLock (0.00s)
=== RUN   TestStateStore_KVSUnlock
--- PASS: TestStateStore_KVSUnlock (0.00s)
=== RUN   TestStateStore_KVS_Snapshot_Restore
--- PASS: TestStateStore_KVS_Snapshot_Restore (0.01s)
=== RUN   TestStateStore_KVS_Watches
--- PASS: TestStateStore_KVS_Watches (0.01s)
=== RUN   TestStateStore_Tombstone_Snapshot_Restore
--- PASS: TestStateStore_Tombstone_Snapshot_Restore (0.00s)
=== RUN   TestStateStore_SessionCreate_SessionGet
--- PASS: TestStateStore_SessionCreate_SessionGet (0.01s)
=== RUN   TestStateStore_NodeSessions
--- PASS: TestStateStore_NodeSessions (0.00s)
=== RUN   TestStateStore_SessionDestroy
--- PASS: TestStateStore_SessionDestroy (0.00s)
=== RUN   TestStateStore_Session_Snapshot_Restore
--- PASS: TestStateStore_Session_Snapshot_Restore (0.01s)
=== RUN   TestStateStore_Session_Watches
--- PASS: TestStateStore_Session_Watches (0.00s)
=== RUN   TestStateStore_Session_Invalidate_DeleteNode
--- PASS: TestStateStore_Session_Invalidate_DeleteNode (0.00s)
=== RUN   TestStateStore_Session_Invalidate_DeleteService
--- PASS: TestStateStore_Session_Invalidate_DeleteService (0.01s)
=== RUN   TestStateStore_Session_Invalidate_Critical_Check
--- PASS: TestStateStore_Session_Invalidate_Critical_Check (0.00s)
=== RUN   TestStateStore_Session_Invalidate_DeleteCheck
--- PASS: TestStateStore_Session_Invalidate_DeleteCheck (0.00s)
=== RUN   TestStateStore_Session_Invalidate_Key_Unlock_Behavior
--- PASS: TestStateStore_Session_Invalidate_Key_Unlock_Behavior (0.01s)
=== RUN   TestStateStore_Session_Invalidate_Key_Delete_Behavior
--- PASS: TestStateStore_Session_Invalidate_Key_Delete_Behavior (0.00s)
=== RUN   TestStateStore_Session_Invalidate_PreparedQuery_Delete
--- PASS: TestStateStore_Session_Invalidate_PreparedQuery_Delete (0.01s)
=== RUN   TestStateStore_ACLSet_ACLGet
--- PASS: TestStateStore_ACLSet_ACLGet (0.00s)
=== RUN   TestStateStore_ACLList
--- PASS: TestStateStore_ACLList (0.00s)
=== RUN   TestStateStore_ACLDelete
--- PASS: TestStateStore_ACLDelete (0.00s)
=== RUN   TestStateStore_ACL_Snapshot_Restore
--- PASS: TestStateStore_ACL_Snapshot_Restore (0.00s)
=== RUN   TestStateStore_ACL_Watches
--- PASS: TestStateStore_ACL_Watches (0.00s)
=== RUN   TestStateStore_Coordinate_Updates
--- PASS: TestStateStore_Coordinate_Updates (0.00s)
=== RUN   TestStateStore_Coordinate_Cleanup
--- PASS: TestStateStore_Coordinate_Cleanup (0.00s)
=== RUN   TestStateStore_Coordinate_Snapshot_Restore
--- PASS: TestStateStore_Coordinate_Snapshot_Restore (0.00s)
=== RUN   TestStateStore_Coordinate_Watches
--- PASS: TestStateStore_Coordinate_Watches (0.00s)
=== RUN   TestTombstoneGC_invalid
--- PASS: TestTombstoneGC_invalid (0.00s)
=== RUN   TestTombstoneGC
--- PASS: TestTombstoneGC (0.03s)
=== RUN   TestTombstoneGC_Expire
--- PASS: TestTombstoneGC_Expire (0.02s)
=== RUN   TestWatch_FullTableWatch
--- PASS: TestWatch_FullTableWatch (0.00s)
=== RUN   TestWatch_DumbWatchManager
--- PASS: TestWatch_DumbWatchManager (0.00s)
=== RUN   TestWatch_PrefixWatchManager
--- PASS: TestWatch_PrefixWatchManager (0.00s)
=== RUN   TestWatch_PrefixWatch
--- PASS: TestWatch_PrefixWatch (0.00s)
=== RUN   TestWatch_MultiWatch
--- PASS: TestWatch_MultiWatch (0.00s)
PASS
ok  	github.com/hashicorp/consul/consul/state	1.165s
=== RUN   TestStructs_PreparedQuery_GetACLPrefix
--- PASS: TestStructs_PreparedQuery_GetACLPrefix (0.00s)
=== RUN   TestEncodeDecode
--- PASS: TestEncodeDecode (0.00s)
=== RUN   TestStructs_Implements
--- PASS: TestStructs_Implements (0.00s)
=== RUN   TestStructs_ServiceNode_Clone
--- PASS: TestStructs_ServiceNode_Clone (0.00s)
=== RUN   TestStructs_ServiceNode_Conversions
--- PASS: TestStructs_ServiceNode_Conversions (0.00s)
=== RUN   TestStructs_NodeService_IsSame
--- PASS: TestStructs_NodeService_IsSame (0.00s)
=== RUN   TestStructs_HealthCheck_IsSame
--- PASS: TestStructs_HealthCheck_IsSame (0.00s)
=== RUN   TestStructs_CheckServiceNodes_Shuffle
--- PASS: TestStructs_CheckServiceNodes_Shuffle (0.02s)
=== RUN   TestStructs_CheckServiceNodes_Filter
--- PASS: TestStructs_CheckServiceNodes_Filter (0.00s)
=== RUN   TestStructs_DirEntry_Clone
--- PASS: TestStructs_DirEntry_Clone (0.00s)
PASS
ok  	github.com/hashicorp/consul/consul/structs	0.068s
=== RUN   TestRandomStagger
--- PASS: TestRandomStagger (0.00s)
=== RUN   TestRateScaledInterval
--- PASS: TestRateScaledInterval (0.00s)
=== RUN   TestStrContains
--- PASS: TestStrContains (0.00s)
PASS
ok  	github.com/hashicorp/consul/lib	0.019s
testing: warning: no tests to run
PASS
ok  	github.com/hashicorp/consul/testutil	0.061s
=== RUN   TestConfig_AppendCA_None
--- PASS: TestConfig_AppendCA_None (0.00s)
=== RUN   TestConfig_CACertificate_Valid
--- PASS: TestConfig_CACertificate_Valid (0.00s)
=== RUN   TestConfig_KeyPair_None
--- PASS: TestConfig_KeyPair_None (0.00s)
=== RUN   TestConfig_KeyPair_Valid
--- PASS: TestConfig_KeyPair_Valid (0.01s)
=== RUN   TestConfig_OutgoingTLS_MissingCA
--- PASS: TestConfig_OutgoingTLS_MissingCA (0.00s)
=== RUN   TestConfig_OutgoingTLS_OnlyCA
--- PASS: TestConfig_OutgoingTLS_OnlyCA (0.00s)
=== RUN   TestConfig_OutgoingTLS_VerifyOutgoing
--- PASS: TestConfig_OutgoingTLS_VerifyOutgoing (0.00s)
=== RUN   TestConfig_OutgoingTLS_ServerName
--- PASS: TestConfig_OutgoingTLS_ServerName (0.00s)
=== RUN   TestConfig_OutgoingTLS_VerifyHostname
--- PASS: TestConfig_OutgoingTLS_VerifyHostname (0.00s)
=== RUN   TestConfig_OutgoingTLS_WithKeyPair
--- PASS: TestConfig_OutgoingTLS_WithKeyPair (0.01s)
=== RUN   TestConfig_outgoingWrapper_OK
--- PASS: TestConfig_outgoingWrapper_OK (0.26s)
=== RUN   TestConfig_outgoingWrapper_BadDC
--- PASS: TestConfig_outgoingWrapper_BadDC (0.25s)
=== RUN   TestConfig_outgoingWrapper_BadCert
--- FAIL: TestConfig_outgoingWrapper_BadCert (0.02s)
	config_test.go:316: should get hostname err: x509: certificate has expired or is not yet valid
=== RUN   TestConfig_wrapTLS_OK
--- FAIL: TestConfig_wrapTLS_OK (0.15s)
	config_test.go:342: wrapTLS err: x509: certificate has expired or is not yet valid
=== RUN   TestConfig_wrapTLS_BadCert
--- PASS: TestConfig_wrapTLS_BadCert (0.25s)
=== RUN   TestConfig_IncomingTLS
--- PASS: TestConfig_IncomingTLS (0.01s)
=== RUN   TestConfig_IncomingTLS_MissingCA
--- PASS: TestConfig_IncomingTLS_MissingCA (0.01s)
=== RUN   TestConfig_IncomingTLS_MissingKey
--- PASS: TestConfig_IncomingTLS_MissingKey (0.00s)
=== RUN   TestConfig_IncomingTLS_NoVerify
--- PASS: TestConfig_IncomingTLS_NoVerify (0.00s)
FAIL
FAIL	github.com/hashicorp/consul/tlsutil	1.011s
=== RUN   TestKeyWatch
--- SKIP: TestKeyWatch (0.00s)
	funcs_test.go:19: 
=== RUN   TestKeyPrefixWatch
--- SKIP: TestKeyPrefixWatch (0.00s)
	funcs_test.go:73: 
=== RUN   TestServicesWatch
--- SKIP: TestServicesWatch (0.00s)
	funcs_test.go:129: 
=== RUN   TestNodesWatch
--- SKIP: TestNodesWatch (0.00s)
	funcs_test.go:172: 
=== RUN   TestServiceWatch
--- SKIP: TestServiceWatch (0.00s)
	funcs_test.go:221: 
=== RUN   TestChecksWatch_State
--- SKIP: TestChecksWatch_State (0.00s)
	funcs_test.go:270: 
=== RUN   TestChecksWatch_Service
--- SKIP: TestChecksWatch_Service (0.00s)
	funcs_test.go:330: 
=== RUN   TestEventWatch
--- SKIP: TestEventWatch (0.00s)
	funcs_test.go:398: 
=== RUN   TestRun_Stop
--- PASS: TestRun_Stop (0.01s)
=== RUN   TestParseBasic
--- PASS: TestParseBasic (0.00s)
=== RUN   TestParse_exempt
--- PASS: TestParse_exempt (0.00s)
PASS
ok  	github.com/hashicorp/consul/watch	0.054s
dh_auto_test: go test -v github.com/hashicorp/consul github.com/hashicorp/consul/acl github.com/hashicorp/consul/api github.com/hashicorp/consul/command github.com/hashicorp/consul/command/agent github.com/hashicorp/consul/consul github.com/hashicorp/consul/consul/prepared_query github.com/hashicorp/consul/consul/state github.com/hashicorp/consul/consul/structs github.com/hashicorp/consul/lib github.com/hashicorp/consul/testutil github.com/hashicorp/consul/tlsutil github.com/hashicorp/consul/watch returned exit code 1
debian/rules:7: recipe for target 'override_dh_auto_test' failed
make[1]: [override_dh_auto_test] Error 1 (ignored)
make[1]: Leaving directory '/<<PKGBUILDDIR>>'
 fakeroot debian/rules binary-arch
dh binary-arch --buildsystem=golang --with=golang
   dh_testroot -a -O--buildsystem=golang
   dh_prep -a -O--buildsystem=golang
   dh_auto_install -a -O--buildsystem=golang
	mkdir -p /<<BUILDDIR>>/consul-0.6.4\~dfsg/debian/tmp/usr
	cp -r bin /<<BUILDDIR>>/consul-0.6.4\~dfsg/debian/tmp/usr
	mkdir -p /<<BUILDDIR>>/consul-0.6.4\~dfsg/debian/tmp/usr/share/gocode/src/github.com/hashicorp/consul
	cp -r -T src/github.com/hashicorp/consul /<<BUILDDIR>>/consul-0.6.4\~dfsg/debian/tmp/usr/share/gocode/src/github.com/hashicorp/consul
   debian/rules override_dh_install
make[1]: Entering directory '/<<PKGBUILDDIR>>'
## Do not install "github.com/hashicorp/consul/command" to -dev package.
dh_install -X/src/github.com/hashicorp/consul/command
make[1]: Leaving directory '/<<PKGBUILDDIR>>'
   dh_installdocs -a -O--buildsystem=golang
   dh_installchangelogs -a -O--buildsystem=golang
   dh_perl -a -O--buildsystem=golang
   dh_link -a -O--buildsystem=golang
   dh_strip_nondeterminism -a -O--buildsystem=golang
   dh_compress -a -O--buildsystem=golang
   dh_fixperms -a -O--buildsystem=golang
   dh_strip -a -O--buildsystem=golang
   dh_makeshlibs -a -O--buildsystem=golang
   dh_shlibdeps -a -O--buildsystem=golang
   dh_installdeb -a -O--buildsystem=golang
   dh_golang -a -O--buildsystem=golang
   dh_gencontrol -a -O--buildsystem=golang
dpkg-gencontrol: warning: File::FcntlLock not available; using flock which is not NFS-safe
   dh_md5sums -a -O--buildsystem=golang
   dh_builddeb -u-Zxz -a -O--buildsystem=golang
dpkg-deb: building package 'consul' in '../consul_0.6.4~dfsg-3_armhf.deb'.
 dpkg-genchanges --build=any -mRaspbian wandboard test autobuilder <root@raspbian.org> >../consul_0.6.4~dfsg-3_armhf.changes
dpkg-genchanges: info: binary-only arch-specific upload (source code and arch-indep packages not included)
 dpkg-source --after-build consul-0.6.4~dfsg
dpkg-buildpackage: info: binary-only upload (no source included)
--------------------------------------------------------------------------------
Build finished at 20160623-0755

Finished
--------

I: Built successfully

+------------------------------------------------------------------------------+
| Post Build Chroot                                                            |
+------------------------------------------------------------------------------+


+------------------------------------------------------------------------------+
| Changes                                                                      |
+------------------------------------------------------------------------------+


consul_0.6.4~dfsg-3_armhf.changes:
----------------------------------

Format: 1.8
Date: Sat, 18 Jun 2016 20:47:29 +1000
Source: consul
Binary: golang-github-hashicorp-consul-dev consul
Architecture: armhf
Version: 0.6.4~dfsg-3
Distribution: stretch-staging
Urgency: medium
Maintainer: Raspbian wandboard test autobuilder <root@raspbian.org>
Changed-By: Dmitry Smirnov <onlyjob@debian.org>
Description:
 consul     - tool for service discovery, monitoring and configuration
 golang-github-hashicorp-consul-dev - tool for service discovery, monitoring and configuration (source)
Changes:
 consul (0.6.4~dfsg-3) unstable; urgency=medium
 .
   * Added "golang-github-hashicorp-go-cleanhttp-dev" to -dev's Depends.
Checksums-Sha1:
 b998d5461621f38980d14cb2d95b1084db293862 2907574 consul_0.6.4~dfsg-3_armhf.deb
Checksums-Sha256:
 8226783f625661583a63b86e29c39b6a66727d3170acd2ab69b00b00def5152c 2907574 consul_0.6.4~dfsg-3_armhf.deb
Files:
 cb83120fa915ceb99ac3a202e51926f0 2907574 devel extra consul_0.6.4~dfsg-3_armhf.deb

+------------------------------------------------------------------------------+
| Package contents                                                             |
+------------------------------------------------------------------------------+


consul_0.6.4~dfsg-3_armhf.deb
-----------------------------

 new debian package, version 2.0.
 size 2907574 bytes: control archive=1873 bytes.
    4050 bytes,    34 lines      control              
     257 bytes,     4 lines      md5sums              
 Package: consul
 Version: 0.6.4~dfsg-3
 Architecture: armhf
 Maintainer: Debian Go Packaging Team <pkg-go-maintainers@lists.alioth.debian.org>
 Installed-Size: 11844
 Depends: libc6 (>= 2.4)
 Built-Using: golang-1.6 (= 1.6.2-1+rpi1), golang-dns (= 0.0~git20160414.0.89d9c5e-1), golang-github-armon-circbuf (= 0.0~git20150827.0.bbbad09-1), golang-github-armon-go-metrics (= 0.0~git20160307.0.f303b03-1), golang-github-armon-go-radix (= 0.0~git20150602.0.fbd82e8-1), golang-github-bgentry-speakeasy (= 0.0~git20150902.0.36e9cfd-1), golang-github-boltdb-bolt (= 1.2.1-1), golang-github-datadog-datadog-go (= 0.0~git20150930.0.b050cd8-1), golang-github-docker-go-units (= 0.3.0-1), golang-github-elazarl-go-bindata-assetfs (= 0.0~git20151224.0.57eb5e1-1), golang-github-fsouza-go-dockerclient (= 0.0+git20160316-2), golang-github-hashicorp-errwrap (= 0.0~git20141028.0.7554cd9-1), golang-github-hashicorp-go-checkpoint (= 0.0~git20151022.0.e4b2dc3-1), golang-github-hashicorp-go-cleanhttp (= 0.0~git20160217.0.875fb67-1), golang-github-hashicorp-go-immutable-radix (= 0.0~git20160222.0.8e8ed81-1), golang-github-hashicorp-go-memdb (= 0.0~git20160301.0.98f52f5-1), golang-github-hashicorp-go-msgpack (= 0.0~git20150518.0.fa3f638-1), golang-github-hashicorp-go-multierror (= 0.0~git20150916.0.d30f099-1), golang-github-hashicorp-go-reap (= 0.0~git20160113.0.2d85522-1), golang-github-hashicorp-go-syslog (= 0.0~git20150218.0.42a2b57-1), golang-github-hashicorp-go-uuid (= 0.0~git20160311.0.d610f28-1), golang-github-hashicorp-golang-lru (= 0.0~git20160207.0.a0d98a5-1), golang-github-hashicorp-hcl (= 0.0~git20160607.0.d7400db-1), golang-github-hashicorp-hil (= 0.0~git20160326.0.40da60f-1), golang-github-hashicorp-logutils (= 0.0~git20150609.0.0dc08b1-1), golang-github-hashicorp-memberlist (= 0.0~git20160329.0.88ac4de-1), golang-github-hashicorp-net-rpc-msgpackrpc (= 0.0~git20151116.0.a14192a-1), golang-github-hashicorp-raft (= 0.0~git20160317.0.3359516-1), golang-github-hashicorp-raft-boltdb (= 0.0~git20150201.d1e82c1-1), golang-github-hashicorp-scada-client (= 0.0~git20150828.0.84989fd-1), golang-github-hashicorp-serf (= 0.7.0~ds1-1), golang-github-hashicorp-yamux (= 
0.0~git20151129.0.df94978-1), golang-github-inconshreveable-muxado (= 0.0~git20140312.0.f693c7e-1), golang-github-mattn-go-isatty (= 0.0.1-1), golang-github-mitchellh-cli (= 0.0~git20160203.0.5c87c51-1), golang-github-mitchellh-copystructure (= 0.0~git20160128.0.80adcec-1), golang-github-mitchellh-mapstructure (= 0.0~git20160212.0.d2dd026-1), golang-github-mitchellh-reflectwalk (= 0.0~git20150527.0.eecf4c7-1), golang-github-ryanuber-columnize (= 2.1.0-1), golang-golang-x-net-dev (= 1:0.0+git20160518.b3e9c8f+dfsg-2), golang-golang-x-sys (= 0.0~git20160611.0.7f918dd-1), golang-logrus (= 0.10.0-2), runc (= 0.1.0+dfsg1-1)
 Section: devel
 Priority: extra
 Homepage: https://github.com/hashicorp/consul
 Description: tool for service discovery, monitoring and configuration
  Consul is a tool for service discovery and configuration. Consul is
  distributed, highly available, and extremely scalable.
  .
  Consul provides several key features:
  .
   - Service Discovery - Consul makes it simple for services to register
     themselves and to discover other services via a DNS or HTTP interface.
     External services such as SaaS providers can be registered as well.
  .
   - Health Checking - Health Checking enables Consul to quickly alert operators
     about any issues in a cluster. The integration with service discovery
     prevents routing traffic to unhealthy hosts and enables service level
     circuit breakers.
  .
   - Key/Value Storage - A flexible key/value store enables storing dynamic
     configuration, feature flagging, coordination, leader election and more. The
     simple HTTP API makes it easy to use anywhere.
  .
   - Multi-Datacenter - Consul is built to be datacenter aware, and can support
     any number of regions without complex configuration.
  .
  Consul runs on Linux, Mac OS X, and Windows. It is recommended to run the
  Consul servers only on Linux, however.

drwxr-xr-x root/root         0 2016-06-23 07:54 ./
drwxr-xr-x root/root         0 2016-06-23 07:54 ./usr/
drwxr-xr-x root/root         0 2016-06-23 07:54 ./usr/bin/
-rwxr-xr-x root/root  12090488 2016-06-23 07:54 ./usr/bin/consul
drwxr-xr-x root/root         0 2016-06-23 07:54 ./usr/share/
drwxr-xr-x root/root         0 2016-06-23 07:54 ./usr/share/doc/
drwxr-xr-x root/root         0 2016-06-23 07:54 ./usr/share/doc/consul/
-rw-r--r-- root/root       585 2016-06-18 10:48 ./usr/share/doc/consul/changelog.Debian.gz
-rw-r--r-- root/root     10550 2016-03-16 16:42 ./usr/share/doc/consul/changelog.gz
-rw-r--r-- root/root     16842 2016-04-11 06:58 ./usr/share/doc/consul/copyright


+------------------------------------------------------------------------------+
| Post Build                                                                   |
+------------------------------------------------------------------------------+


+------------------------------------------------------------------------------+
| Cleanup                                                                      |
+------------------------------------------------------------------------------+

Purging /<<BUILDDIR>>
Not cleaning session: cloned chroot in use

+------------------------------------------------------------------------------+
| Summary                                                                      |
+------------------------------------------------------------------------------+

Build Architecture: armhf
Build-Space: 88316
Build-Time: 968
Distribution: stretch-staging
Host Architecture: armhf
Install-Time: 687
Job: consul_0.6.4~dfsg-3
Machine Architecture: armhf
Package: consul
Package-Time: 1760
Source-Version: 0.6.4~dfsg-3
Space: 88316
Status: successful
Version: 0.6.4~dfsg-3
--------------------------------------------------------------------------------
Finished at 20160623-0755
Build needed 00:29:20, 88316k disc space