Raspbian Package Auto-Building

Build log for nomad (0.3.1+dfsg-1) on armhf

nomad 0.3.1+dfsg-1 armhf → 2016-04-14 13:04:41

sbuild (Debian sbuild) 0.66.0 (04 Oct 2015) on bm-wb-03

+==============================================================================+
| nomad 0.3.1+dfsg-1 (armhf)                                 14 Apr 2016 12:36 |
+==============================================================================+

Package: nomad
Version: 0.3.1+dfsg-1
Source Version: 0.3.1+dfsg-1
Distribution: stretch-staging
Machine Architecture: armhf
Host Architecture: armhf
Build Architecture: armhf

I: NOTICE: Log filtering will replace 'build/nomad-sIH1kI/nomad-0.3.1+dfsg' with '<<PKGBUILDDIR>>'
I: NOTICE: Log filtering will replace 'build/nomad-sIH1kI' with '<<BUILDDIR>>'
I: NOTICE: Log filtering will replace 'var/lib/schroot/mount/stretch-staging-armhf-sbuild-065db6da-3980-4d70-895c-32e139fef513' with '<<CHROOT>>'
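
These substitutions are applied to the rest of the log. As a concrete example derived from the notices above, a path such as /build/nomad-sIH1kI/resolver-WDC8FO appears below as /<<BUILDDIR>>/resolver-WDC8FO, and the unpacked source tree /build/nomad-sIH1kI/nomad-0.3.1+dfsg appears as <<PKGBUILDDIR>>.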

+------------------------------------------------------------------------------+
| Update chroot                                                                |
+------------------------------------------------------------------------------+

Get:1 http://172.17.0.1/private stretch-staging InRelease [11.3 kB]
Get:2 http://172.17.0.1/private stretch-staging/main Sources [8909 kB]
Get:3 http://172.17.0.1/private stretch-staging/main armhf Packages [11.0 MB]
Fetched 19.9 MB in 22s (895 kB/s)
Reading package lists...
W: No sandbox user '_apt' on the system, can not drop privileges

+------------------------------------------------------------------------------+
| Fetch source files                                                           |
+------------------------------------------------------------------------------+


Check APT
---------

Checking available source versions...

Download source files with APT
------------------------------

Reading package lists...
NOTICE: 'nomad' packaging is maintained in the 'Git' version control system at:
git://anonscm.debian.org/pkg-go/packages/nomad.git
Please use:
git clone git://anonscm.debian.org/pkg-go/packages/nomad.git
to retrieve the latest (possibly unreleased) updates to the package.
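
One way to build from that packaging repository locally would be the following sketch, assuming the git-buildpackage tooling (this is not part of the automated build shown in this log):

  # assumed local workflow, not performed by sbuild here
  gbp clone git://anonscm.debian.org/pkg-go/packages/nomad.git
  cd nomad
  gbp buildpackage -us -uc
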
Need to get 7360 kB of source archives.
Get:1 http://172.17.0.1/private stretch-staging/main nomad 0.3.1+dfsg-1 (dsc) [4177 B]
Get:2 http://172.17.0.1/private stretch-staging/main nomad 0.3.1+dfsg-1 (tar) [7346 kB]
Get:3 http://172.17.0.1/private stretch-staging/main nomad 0.3.1+dfsg-1 (diff) [10.1 kB]
Fetched 7360 kB in 0s (8357 kB/s)
Download complete and in download only mode
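
The same fetch can be reproduced by hand in an environment with the corresponding deb-src entry enabled; a sketch, with only the package name and version taken from this log:

  # manual equivalent of the source fetch above (not part of this log)
  apt-get update
  apt-get source --download-only nomad=0.3.1+dfsg-1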

Check architectures
-------------------


Check dependencies
------------------

Merged Build-Depends: build-essential, fakeroot
Filtered Build-Depends: build-essential, fakeroot
dpkg-deb: building package 'sbuild-build-depends-core-dummy' in '/<<BUILDDIR>>/resolver-WDC8FO/apt_archive/sbuild-build-depends-core-dummy.deb'.
OK
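
The apt-based resolver satisfies dependencies by wrapping the filtered Build-Depends in a throwaway binary package and letting apt install it from a local archive. A minimal sketch of the core dummy's control data, with the field layout assumed and the values taken from this log:

  Package: sbuild-build-depends-core-dummy
  Version: 0.invalid.0
  Depends: build-essential, fakeroot
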
Get:1 file:/<<BUILDDIR>>/resolver-WDC8FO/apt_archive ./ InRelease
Ign:1 file:/<<BUILDDIR>>/resolver-WDC8FO/apt_archive ./ InRelease
Get:2 file:/<<BUILDDIR>>/resolver-WDC8FO/apt_archive ./ Release [2119 B]
Get:3 file:/<<BUILDDIR>>/resolver-WDC8FO/apt_archive ./ Release.gpg [299 B]
Get:4 file:/<<BUILDDIR>>/resolver-WDC8FO/apt_archive ./ Sources [214 B]
Get:5 file:/<<BUILDDIR>>/resolver-WDC8FO/apt_archive ./ Packages [527 B]
Reading package lists...
W: No sandbox user '_apt' on the system, can not drop privileges
Reading package lists...

+------------------------------------------------------------------------------+
| Install core build dependencies (apt-based resolver)                         |
+------------------------------------------------------------------------------+

Installing build dependencies
Reading package lists...
Building dependency tree...
Reading state information...
The following NEW packages will be installed:
  sbuild-build-depends-core-dummy
0 upgraded, 1 newly installed, 0 to remove and 57 not upgraded.
Need to get 0 B/768 B of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 file:/<<BUILDDIR>>/resolver-WDC8FO/apt_archive ./ sbuild-build-depends-core-dummy 0.invalid.0 [768 B]
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package sbuild-build-depends-core-dummy.
(Reading database ... 13786 files and directories currently installed.)
Preparing to unpack .../sbuild-build-depends-core-dummy.deb ...
Unpacking sbuild-build-depends-core-dummy (0.invalid.0) ...
Setting up sbuild-build-depends-core-dummy (0.invalid.0) ...
W: No sandbox user '_apt' on the system, can not drop privileges
Merged Build-Depends: debhelper (>= 9), dh-systemd, dh-golang, golang-go, golang-github-armon-go-metrics-dev, golang-github-armon-go-radix-dev, golang-github-boltdb-bolt-dev, golang-github-coreos-go-systemd-dev, golang-github-datadog-datadog-go-dev, golang-github-docker-docker-dev, golang-github-docker-go-units-dev, golang-github-dustin-go-humanize-dev, golang-github-fsouza-go-dockerclient-dev (>= 0.0+git20160316~), golang-github-go-ini-ini-dev, golang-dbus-dev | golang-github-godbus-dbus-dev, golang-goprotobuf-dev | golang-github-golang-protobuf-dev, golang-github-gorhill-cronexpr-dev, golang-github-hashicorp-consul-dev, golang-github-hashicorp-errwrap-dev, golang-github-hashicorp-go-checkpoint-dev, golang-github-hashicorp-go-cleanhttp-dev, golang-github-hashicorp-go-getter-dev, golang-github-hashicorp-go-immutable-radix-dev, golang-github-hashicorp-go-memdb-dev, golang-github-hashicorp-go-msgpack-dev, golang-github-hashicorp-go-multierror-dev, golang-github-hashicorp-go-plugin-dev, golang-github-hashicorp-go-syslog-dev, golang-github-hashicorp-go-version-dev, golang-github-hashicorp-golang-lru-dev, golang-github-hashicorp-hcl-dev, golang-github-hashicorp-logutils-dev, golang-github-hashicorp-memberlist-dev, golang-github-hashicorp-net-rpc-msgpackrpc-dev (>= 0.0~git20151116~), golang-github-hashicorp-raft-dev (>= 0.0~git20160317~), golang-github-hashicorp-raft-boltdb-dev, golang-github-hashicorp-scada-client-dev, golang-github-hashicorp-serf-dev, golang-github-hashicorp-yamux-dev, golang-github-jmespath-go-jmespath-dev, golang-github-kardianos-osext-dev, golang-github-mattn-go-isatty-dev, golang-protobuf-extensions-dev | golang-github-matttproud-protobuf-extensions-dev, golang-github-mitchellh-cli-dev (>= 0.0~git20160203~), golang-github-mitchellh-copystructure-dev, golang-github-mitchellh-hashstructure-dev, golang-github-mitchellh-mapstructure-dev, golang-github-mitchellh-reflectwalk-dev, golang-github-shirou-gopsutil-dev, golang-github-opencontainers-runc-dev, golang-prometheus-client-dev | golang-github-prometheus-client-dev, golang-github-prometheus-client-model-dev, golang-github-prometheus-common-dev, golang-github-ryanuber-columnize-dev (>= 2.1.0~), golang-github-ugorji-go-codec-dev, golang-golang-x-sys-dev
Filtered Build-Depends: debhelper (>= 9), dh-systemd, dh-golang, golang-go, golang-github-armon-go-metrics-dev, golang-github-armon-go-radix-dev, golang-github-boltdb-bolt-dev, golang-github-coreos-go-systemd-dev, golang-github-datadog-datadog-go-dev, golang-github-docker-docker-dev, golang-github-docker-go-units-dev, golang-github-dustin-go-humanize-dev, golang-github-fsouza-go-dockerclient-dev (>= 0.0+git20160316~), golang-github-go-ini-ini-dev, golang-dbus-dev, golang-goprotobuf-dev, golang-github-gorhill-cronexpr-dev, golang-github-hashicorp-consul-dev, golang-github-hashicorp-errwrap-dev, golang-github-hashicorp-go-checkpoint-dev, golang-github-hashicorp-go-cleanhttp-dev, golang-github-hashicorp-go-getter-dev, golang-github-hashicorp-go-immutable-radix-dev, golang-github-hashicorp-go-memdb-dev, golang-github-hashicorp-go-msgpack-dev, golang-github-hashicorp-go-multierror-dev, golang-github-hashicorp-go-plugin-dev, golang-github-hashicorp-go-syslog-dev, golang-github-hashicorp-go-version-dev, golang-github-hashicorp-golang-lru-dev, golang-github-hashicorp-hcl-dev, golang-github-hashicorp-logutils-dev, golang-github-hashicorp-memberlist-dev, golang-github-hashicorp-net-rpc-msgpackrpc-dev (>= 0.0~git20151116~), golang-github-hashicorp-raft-dev (>= 0.0~git20160317~), golang-github-hashicorp-raft-boltdb-dev, golang-github-hashicorp-scada-client-dev, golang-github-hashicorp-serf-dev, golang-github-hashicorp-yamux-dev, golang-github-jmespath-go-jmespath-dev, golang-github-kardianos-osext-dev, golang-github-mattn-go-isatty-dev, golang-protobuf-extensions-dev, golang-github-mitchellh-cli-dev (>= 0.0~git20160203~), golang-github-mitchellh-copystructure-dev, golang-github-mitchellh-hashstructure-dev, golang-github-mitchellh-mapstructure-dev, golang-github-mitchellh-reflectwalk-dev, golang-github-shirou-gopsutil-dev, golang-github-opencontainers-runc-dev, golang-prometheus-client-dev, golang-github-prometheus-client-model-dev, golang-github-prometheus-common-dev, golang-github-ryanuber-columnize-dev (>= 2.1.0~), golang-github-ugorji-go-codec-dev, golang-golang-x-sys-dev
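
Here the filtered list differs from the merged list only in that alternative dependencies are reduced to their first member before the dummy package is generated, for example:

  golang-dbus-dev | golang-github-godbus-dbus-dev            →  golang-dbus-dev
  golang-goprotobuf-dev | golang-github-golang-protobuf-dev  →  golang-goprotobuf-dev
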
dpkg-deb: building package 'sbuild-build-depends-nomad-dummy' in '/<<BUILDDIR>>/resolver-vGx7Tf/apt_archive/sbuild-build-depends-nomad-dummy.deb'.
OK
Get:1 file:/<<BUILDDIR>>/resolver-vGx7Tf/apt_archive ./ InRelease
Ign:1 file:/<<BUILDDIR>>/resolver-vGx7Tf/apt_archive ./ InRelease
Get:2 file:/<<BUILDDIR>>/resolver-vGx7Tf/apt_archive ./ Release [2119 B]
Get:3 file:/<<BUILDDIR>>/resolver-vGx7Tf/apt_archive ./ Release.gpg [299 B]
Get:4 file:/<<BUILDDIR>>/resolver-vGx7Tf/apt_archive ./ Sources [753 B]
Get:5 file:/<<BUILDDIR>>/resolver-vGx7Tf/apt_archive ./ Packages [1057 B]
Reading package lists...
W: No sandbox user '_apt' on the system, can not drop privileges
Reading package lists...

+------------------------------------------------------------------------------+
| Install nomad build dependencies (apt-based resolver)                        |
+------------------------------------------------------------------------------+

Installing build dependencies
Reading package lists...
Building dependency tree...
Reading state information...
The following additional packages will be installed:
  autoconf automake autopoint autotools-dev bsdmainutils ca-certificates
  debhelper dh-autoreconf dh-golang dh-strip-nondeterminism dh-systemd file
  gettext gettext-base golang-check.v1-dev golang-codegangsta-cli-dev
  golang-context-dev golang-dbus-dev golang-dns-dev
  golang-github-agtorre-gocolorize-dev golang-github-armon-circbuf-dev
  golang-github-armon-go-metrics-dev golang-github-armon-go-radix-dev
  golang-github-armon-gomdb-dev golang-github-aws-aws-sdk-go-dev
  golang-github-bgentry-speakeasy-dev golang-github-bitly-go-simplejson-dev
  golang-github-bmizerany-assert-dev golang-github-boltdb-bolt-dev
  golang-github-bradfitz-gomemcache-dev golang-github-bugsnag-bugsnag-go-dev
  golang-github-bugsnag-panicwrap-dev golang-github-codegangsta-cli-dev
  golang-github-coreos-go-systemd-dev golang-github-datadog-datadog-go-dev
  golang-github-docker-docker-dev golang-github-docker-go-units-dev
  golang-github-dustin-go-humanize-dev
  golang-github-elazarl-go-bindata-assetfs-dev
  golang-github-fsouza-go-dockerclient-dev golang-github-garyburd-redigo-dev
  golang-github-getsentry-raven-go-dev golang-github-go-fsnotify-fsnotify-dev
  golang-github-go-ini-ini-dev golang-github-gorhill-cronexpr-dev
  golang-github-gorilla-mux-dev golang-github-hashicorp-consul-dev
  golang-github-hashicorp-errwrap-dev
  golang-github-hashicorp-go-checkpoint-dev
  golang-github-hashicorp-go-cleanhttp-dev
  golang-github-hashicorp-go-getter-dev
  golang-github-hashicorp-go-immutable-radix-dev
  golang-github-hashicorp-go-memdb-dev golang-github-hashicorp-go-msgpack-dev
  golang-github-hashicorp-go-multierror-dev
  golang-github-hashicorp-go-plugin-dev golang-github-hashicorp-go-reap-dev
  golang-github-hashicorp-go-syslog-dev golang-github-hashicorp-go-version-dev
  golang-github-hashicorp-golang-lru-dev golang-github-hashicorp-hcl-dev
  golang-github-hashicorp-logutils-dev golang-github-hashicorp-mdns-dev
  golang-github-hashicorp-memberlist-dev
  golang-github-hashicorp-net-rpc-msgpackrpc-dev
  golang-github-hashicorp-raft-boltdb-dev golang-github-hashicorp-raft-dev
  golang-github-hashicorp-scada-client-dev golang-github-hashicorp-serf-dev
  golang-github-hashicorp-uuid-dev golang-github-hashicorp-yamux-dev
  golang-github-inconshreveable-muxado-dev
  golang-github-jacobsa-oglematchers-dev golang-github-jacobsa-oglemock-dev
  golang-github-jacobsa-ogletest-dev golang-github-jacobsa-reqtrace-dev
  golang-github-jmespath-go-jmespath-dev golang-github-juju-loggo-dev
  golang-github-julienschmidt-httprouter-dev golang-github-kardianos-osext-dev
  golang-github-lsegal-gucumber-dev golang-github-mattn-go-isatty-dev
  golang-github-mitchellh-cli-dev golang-github-mitchellh-copystructure-dev
  golang-github-mitchellh-hashstructure-dev
  golang-github-mitchellh-mapstructure-dev
  golang-github-mitchellh-reflectwalk-dev
  golang-github-opencontainers-runc-dev golang-github-opencontainers-specs-dev
  golang-github-prometheus-client-model-dev
  golang-github-prometheus-common-dev golang-github-revel-revel-dev
  golang-github-robfig-config-dev golang-github-robfig-pathtree-dev
  golang-github-ryanuber-columnize-dev
  golang-github-seccomp-libseccomp-golang-dev
  golang-github-shiena-ansicolor-dev golang-github-shirou-gopsutil-dev
  golang-github-sirupsen-logrus-dev golang-github-smartystreets-goconvey-dev
  golang-github-stretchr-testify-dev golang-github-stvp-go-udp-testing-dev
  golang-github-tobi-airbrake-go-dev golang-github-ugorji-go-codec-dev
  golang-github-ugorji-go-msgpack-dev golang-github-vaughan0-go-ini-dev
  golang-github-vishvananda-netlink-dev golang-github-vishvananda-netns-dev
  golang-github-xeipuuv-gojsonpointer-dev
  golang-github-xeipuuv-gojsonreference-dev
  golang-github-xeipuuv-gojsonschema-dev golang-go golang-go.net-dev
  golang-gocapability-dev golang-golang-x-crypto-dev golang-golang-x-net-dev
  golang-golang-x-sys-dev golang-golang-x-tools golang-golang-x-tools-dev
  golang-gopkg-mgo.v2-dev golang-gopkg-tomb.v2-dev
  golang-gopkg-vmihailenco-msgpack.v2-dev golang-goprotobuf-dev
  golang-logrus-dev golang-objx-dev golang-pretty-dev golang-procfs-dev
  golang-prometheus-client-dev golang-protobuf-extensions-dev golang-src
  golang-text-dev golang-x-text-dev golang-yaml.v2-dev groff-base
  intltool-debian libarchive-zip-perl libbsd0 libcroco3 libffi6
  libfile-stripnondeterminism-perl libglib2.0-0 libicu55 libjs-jquery
  libjs-jquery-ui liblmdb-dev liblmdb0 libmagic1 libpipeline1 libprotobuf9v5
  libprotoc9v5 libsasl2-2 libsasl2-dev libsasl2-modules-db libseccomp-dev
  libseccomp2 libsigsegv2 libssl1.0.2 libsystemd-dev libsystemd0
  libtimedate-perl libtool libunistring0 libxml2 m4 man-db openssl pkg-config
  po-debconf protobuf-compiler systemd
Suggested packages:
  autoconf-archive gnu-standards autoconf-doc wamerican | wordlist whois
  vacation dh-make augeas-tools gettext-doc libasprintf-dev libgettextpo-dev
  bzr git mercurial subversion groff libjs-jquery-ui-docs seccomp libtool-doc
  gfortran | fortran95-compiler gcj-jdk less www-browser libmail-box-perl
  systemd-ui systemd-container
Recommended packages:
  curl | wget | lynx-cur libglib2.0-data shared-mime-info xdg-user-dirs
  javascript-common lmdb-doc libsasl2-modules libltdl-dev xml-core
  libmail-sendmail-perl libpam-systemd dbus
The following NEW packages will be installed:
  autoconf automake autopoint autotools-dev bsdmainutils ca-certificates
  debhelper dh-autoreconf dh-golang dh-strip-nondeterminism dh-systemd file
  gettext gettext-base golang-check.v1-dev golang-codegangsta-cli-dev
  golang-context-dev golang-dbus-dev golang-dns-dev
  golang-github-agtorre-gocolorize-dev golang-github-armon-circbuf-dev
  golang-github-armon-go-metrics-dev golang-github-armon-go-radix-dev
  golang-github-armon-gomdb-dev golang-github-aws-aws-sdk-go-dev
  golang-github-bgentry-speakeasy-dev golang-github-bitly-go-simplejson-dev
  golang-github-bmizerany-assert-dev golang-github-boltdb-bolt-dev
  golang-github-bradfitz-gomemcache-dev golang-github-bugsnag-bugsnag-go-dev
  golang-github-bugsnag-panicwrap-dev golang-github-codegangsta-cli-dev
  golang-github-coreos-go-systemd-dev golang-github-datadog-datadog-go-dev
  golang-github-docker-docker-dev golang-github-docker-go-units-dev
  golang-github-dustin-go-humanize-dev
  golang-github-elazarl-go-bindata-assetfs-dev
  golang-github-fsouza-go-dockerclient-dev golang-github-garyburd-redigo-dev
  golang-github-getsentry-raven-go-dev golang-github-go-fsnotify-fsnotify-dev
  golang-github-go-ini-ini-dev golang-github-gorhill-cronexpr-dev
  golang-github-gorilla-mux-dev golang-github-hashicorp-consul-dev
  golang-github-hashicorp-errwrap-dev
  golang-github-hashicorp-go-checkpoint-dev
  golang-github-hashicorp-go-cleanhttp-dev
  golang-github-hashicorp-go-getter-dev
  golang-github-hashicorp-go-immutable-radix-dev
  golang-github-hashicorp-go-memdb-dev golang-github-hashicorp-go-msgpack-dev
  golang-github-hashicorp-go-multierror-dev
  golang-github-hashicorp-go-plugin-dev golang-github-hashicorp-go-reap-dev
  golang-github-hashicorp-go-syslog-dev golang-github-hashicorp-go-version-dev
  golang-github-hashicorp-golang-lru-dev golang-github-hashicorp-hcl-dev
  golang-github-hashicorp-logutils-dev golang-github-hashicorp-mdns-dev
  golang-github-hashicorp-memberlist-dev
  golang-github-hashicorp-net-rpc-msgpackrpc-dev
  golang-github-hashicorp-raft-boltdb-dev golang-github-hashicorp-raft-dev
  golang-github-hashicorp-scada-client-dev golang-github-hashicorp-serf-dev
  golang-github-hashicorp-uuid-dev golang-github-hashicorp-yamux-dev
  golang-github-inconshreveable-muxado-dev
  golang-github-jacobsa-oglematchers-dev golang-github-jacobsa-oglemock-dev
  golang-github-jacobsa-ogletest-dev golang-github-jacobsa-reqtrace-dev
  golang-github-jmespath-go-jmespath-dev golang-github-juju-loggo-dev
  golang-github-julienschmidt-httprouter-dev golang-github-kardianos-osext-dev
  golang-github-lsegal-gucumber-dev golang-github-mattn-go-isatty-dev
  golang-github-mitchellh-cli-dev golang-github-mitchellh-copystructure-dev
  golang-github-mitchellh-hashstructure-dev
  golang-github-mitchellh-mapstructure-dev
  golang-github-mitchellh-reflectwalk-dev
  golang-github-opencontainers-runc-dev golang-github-opencontainers-specs-dev
  golang-github-prometheus-client-model-dev
  golang-github-prometheus-common-dev golang-github-revel-revel-dev
  golang-github-robfig-config-dev golang-github-robfig-pathtree-dev
  golang-github-ryanuber-columnize-dev
  golang-github-seccomp-libseccomp-golang-dev
  golang-github-shiena-ansicolor-dev golang-github-shirou-gopsutil-dev
  golang-github-sirupsen-logrus-dev golang-github-smartystreets-goconvey-dev
  golang-github-stretchr-testify-dev golang-github-stvp-go-udp-testing-dev
  golang-github-tobi-airbrake-go-dev golang-github-ugorji-go-codec-dev
  golang-github-ugorji-go-msgpack-dev golang-github-vaughan0-go-ini-dev
  golang-github-vishvananda-netlink-dev golang-github-vishvananda-netns-dev
  golang-github-xeipuuv-gojsonpointer-dev
  golang-github-xeipuuv-gojsonreference-dev
  golang-github-xeipuuv-gojsonschema-dev golang-go golang-go.net-dev
  golang-gocapability-dev golang-golang-x-crypto-dev golang-golang-x-net-dev
  golang-golang-x-sys-dev golang-golang-x-tools golang-golang-x-tools-dev
  golang-gopkg-mgo.v2-dev golang-gopkg-tomb.v2-dev
  golang-gopkg-vmihailenco-msgpack.v2-dev golang-goprotobuf-dev
  golang-logrus-dev golang-objx-dev golang-pretty-dev golang-procfs-dev
  golang-prometheus-client-dev golang-protobuf-extensions-dev golang-src
  golang-text-dev golang-x-text-dev golang-yaml.v2-dev groff-base
  intltool-debian libarchive-zip-perl libbsd0 libcroco3 libffi6
  libfile-stripnondeterminism-perl libglib2.0-0 libicu55 libjs-jquery
  libjs-jquery-ui liblmdb-dev liblmdb0 libmagic1 libpipeline1 libprotobuf9v5
  libprotoc9v5 libsasl2-2 libsasl2-dev libsasl2-modules-db libseccomp-dev
  libsigsegv2 libssl1.0.2 libsystemd-dev libtimedate-perl libtool
  libunistring0 libxml2 m4 man-db openssl pkg-config po-debconf
  protobuf-compiler sbuild-build-depends-nomad-dummy
The following packages will be upgraded:
  libseccomp2 libsystemd0 systemd
3 upgraded, 168 newly installed, 0 to remove and 54 not upgraded.
Need to get 67.5 MB/76.6 MB of archives.
After this operation, 420 MB of additional disk space will be used.
Get:1 file:/<<BUILDDIR>>/resolver-vGx7Tf/apt_archive ./ sbuild-build-depends-nomad-dummy 0.invalid.0 [1286 B]
Get:2 http://172.17.0.1/private stretch-staging/main armhf libsystemd0 armhf 229-4 [237 kB]
Get:3 http://172.17.0.1/private stretch-staging/main armhf systemd armhf 229-4 [3151 kB]
Get:4 http://172.17.0.1/private stretch-staging/main armhf libseccomp2 armhf 2.3.0-1 [30.9 kB]
Get:5 http://172.17.0.1/private stretch-staging/main armhf groff-base armhf 1.22.3-7 [1083 kB]
Get:6 http://172.17.0.1/private stretch-staging/main armhf libbsd0 armhf 0.8.2-1 [88.0 kB]
Get:7 http://172.17.0.1/private stretch-staging/main armhf bsdmainutils armhf 9.0.10 [177 kB]
Get:8 http://172.17.0.1/private stretch-staging/main armhf libpipeline1 armhf 1.4.1-2 [23.7 kB]
Get:9 http://172.17.0.1/private stretch-staging/main armhf man-db armhf 2.7.5-1 [975 kB]
Get:10 http://172.17.0.1/private stretch-staging/main armhf liblmdb0 armhf 0.9.18-1 [37.8 kB]
Get:11 http://172.17.0.1/private stretch-staging/main armhf liblmdb-dev armhf 0.9.18-1 [54.1 kB]
Get:12 http://172.17.0.1/private stretch-staging/main armhf libunistring0 armhf 0.9.3-5.2 [253 kB]
Get:13 http://172.17.0.1/private stretch-staging/main armhf libmagic1 armhf 1:5.25-2 [250 kB]
Get:14 http://172.17.0.1/private stretch-staging/main armhf file armhf 1:5.25-2 [61.2 kB]
Get:15 http://172.17.0.1/private stretch-staging/main armhf gettext-base armhf 0.19.7-2 [111 kB]
Get:16 http://172.17.0.1/private stretch-staging/main armhf libsasl2-modules-db armhf 2.1.26.dfsg1-15 [65.6 kB]
Get:17 http://172.17.0.1/private stretch-staging/main armhf libsasl2-2 armhf 2.1.26.dfsg1-15 [96.7 kB]
Get:18 http://172.17.0.1/private stretch-staging/main armhf libsigsegv2 armhf 2.10-5 [28.4 kB]
Get:19 http://172.17.0.1/private stretch-staging/main armhf m4 armhf 1.4.17-5 [239 kB]
Get:20 http://172.17.0.1/private stretch-staging/main armhf autoconf all 2.69-10 [338 kB]
Get:21 http://172.17.0.1/private stretch-staging/main armhf autotools-dev all 20150820.1 [71.7 kB]
Get:22 http://172.17.0.1/private stretch-staging/main armhf automake all 1:1.15-4 [735 kB]
Get:23 http://172.17.0.1/private stretch-staging/main armhf autopoint all 0.19.7-2 [424 kB]
Get:24 http://172.17.0.1/private stretch-staging/main armhf openssl armhf 1.0.2g-1 [666 kB]
Get:25 http://172.17.0.1/private stretch-staging/main armhf ca-certificates all 20160104 [200 kB]
Get:26 http://172.17.0.1/private stretch-staging/main armhf libglib2.0-0 armhf 2.48.0-1 [2540 kB]
Get:27 http://172.17.0.1/private stretch-staging/main armhf libcroco3 armhf 0.6.11-1 [131 kB]
Get:28 http://172.17.0.1/private stretch-staging/main armhf gettext armhf 0.19.7-2 [1400 kB]
Get:29 http://172.17.0.1/private stretch-staging/main armhf intltool-debian all 0.35.0+20060710.4 [26.3 kB]
Get:30 http://172.17.0.1/private stretch-staging/main armhf po-debconf all 1.0.19 [249 kB]
Get:31 http://172.17.0.1/private stretch-staging/main armhf libarchive-zip-perl all 1.57-1 [95.1 kB]
Get:32 http://172.17.0.1/private stretch-staging/main armhf libfile-stripnondeterminism-perl all 0.016-1 [11.9 kB]
Get:33 http://172.17.0.1/private stretch-staging/main armhf libtimedate-perl all 2.3000-2 [42.2 kB]
Get:34 http://172.17.0.1/private stretch-staging/main armhf dh-strip-nondeterminism all 0.016-1 [6998 B]
Get:35 http://172.17.0.1/private stretch-staging/main armhf libtool all 2.4.6-0.1 [200 kB]
Get:36 http://172.17.0.1/private stretch-staging/main armhf dh-autoreconf all 12 [15.8 kB]
Get:37 http://172.17.0.1/private stretch-staging/main armhf debhelper all 9.20160403 [800 kB]
Get:38 http://172.17.0.1/private stretch-staging/main armhf golang-src armhf 2:1.6-1+rpi1 [6782 kB]
Get:39 http://172.17.0.1/private stretch-staging/main armhf golang-go armhf 2:1.6-1+rpi1 [22.2 MB]
Get:40 http://172.17.0.1/private stretch-staging/main armhf golang-text-dev all 0.0~git20130502-1 [6246 B]
Get:41 http://172.17.0.1/private stretch-staging/main armhf golang-pretty-dev all 0.0~git20130613-1 [7220 B]
Get:42 http://172.17.0.1/private stretch-staging/main armhf golang-github-bmizerany-assert-dev all 0.0~git20120716-1 [3658 B]
Get:43 http://172.17.0.1/private stretch-staging/main armhf golang-github-bitly-go-simplejson-dev all 0.5.0-1 [6916 B]
Get:44 http://172.17.0.1/private stretch-staging/main armhf golang-github-docker-docker-dev all 1.8.3~ds1-2 [223 kB]
Get:45 http://172.17.0.1/private stretch-staging/main armhf golang-github-mattn-go-isatty-dev all 0.0.1-1 [3456 B]
Get:46 http://172.17.0.1/private stretch-staging/main armhf libjs-jquery all 1.11.3+dfsg-4 [163 kB]
Get:47 http://172.17.0.1/private stretch-staging/main armhf libjs-jquery-ui all 1.10.1+dfsg-1 [499 kB]
Get:48 http://172.17.0.1/private stretch-staging/main armhf libprotobuf9v5 armhf 2.6.1-1.3 [292 kB]
Get:49 http://172.17.0.1/private stretch-staging/main armhf libprotoc9v5 armhf 2.6.1-1.3 [241 kB]
Get:50 http://172.17.0.1/private stretch-staging/main armhf libsasl2-dev armhf 2.1.26.dfsg1-15 [293 kB]
Get:51 http://172.17.0.1/private stretch-staging/main armhf libseccomp-dev armhf 2.3.0-1 [55.1 kB]
Get:52 http://172.17.0.1/private stretch-staging/main armhf libsystemd-dev armhf 229-4 [212 kB]
Get:53 http://172.17.0.1/private stretch-staging/main armhf protobuf-compiler armhf 2.6.1-1.3 [35.8 kB]
Get:54 http://172.17.0.1/private stretch-staging/main armhf dh-golang all 1.12 [9402 B]
Get:55 http://172.17.0.1/private stretch-staging/main armhf dh-systemd all 1.29 [20.1 kB]
Get:56 http://172.17.0.1/private stretch-staging/main armhf golang-check.v1-dev all 0.0+git20150729.11d3bc7-3 [29.1 kB]
Get:57 http://172.17.0.1/private stretch-staging/main armhf golang-github-codegangsta-cli-dev all 0.0~git20151221-1 [21.0 kB]
Get:58 http://172.17.0.1/private stretch-staging/main armhf golang-codegangsta-cli-dev all 0.0~git20151221-1 [2438 B]
Get:59 http://172.17.0.1/private stretch-staging/main armhf golang-context-dev all 0.0~git20140604.1.14f550f-1 [6280 B]
Get:60 http://172.17.0.1/private stretch-staging/main armhf golang-dbus-dev all 3-1 [39.4 kB]
Get:61 http://172.17.0.1/private stretch-staging/main armhf golang-dns-dev all 0.0~git20151030.0.6a15566-1 [128 kB]
Get:62 http://172.17.0.1/private stretch-staging/main armhf golang-github-agtorre-gocolorize-dev all 1.0.0-1 [7020 B]
Get:63 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-circbuf-dev all 0.0~git20150827.0.bbbad09-1 [3650 B]
Get:64 http://172.17.0.1/private stretch-staging/main armhf golang-github-julienschmidt-httprouter-dev all 1.1-1 [15.5 kB]
Get:65 http://172.17.0.1/private stretch-staging/main armhf golang-x-text-dev all 0+git20151217.cf49866-1 [2096 kB]
Get:66 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-crypto-dev all 1:0.0~git20151201.0.7b85b09-2 [802 kB]
Get:67 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-net-dev all 1:0.0+git20160110.4fd4a9f-1 [514 kB]
Get:68 http://172.17.0.1/private stretch-staging/main armhf golang-goprotobuf-dev armhf 0.0~git20150526-2 [700 kB]
Get:69 http://172.17.0.1/private stretch-staging/main armhf golang-github-kardianos-osext-dev all 0.0~git20151124.0.10da294-2 [6380 B]
Get:70 http://172.17.0.1/private stretch-staging/main armhf golang-github-bugsnag-panicwrap-dev all 1.1.0-1 [9836 B]
Get:71 http://172.17.0.1/private stretch-staging/main armhf golang-github-juju-loggo-dev all 0.0~git20150527.0.8477fc9-1 [16.4 kB]
Get:72 http://172.17.0.1/private stretch-staging/main armhf golang-github-bradfitz-gomemcache-dev all 0.0~git20141109-1 [9988 B]
Get:73 http://172.17.0.1/private stretch-staging/main armhf golang-github-garyburd-redigo-dev all 0.0~git20150901.0.d8dbe4d-1 [27.8 kB]
Get:74 http://172.17.0.1/private stretch-staging/main armhf golang-github-go-fsnotify-fsnotify-dev all 1.2.9-1 [24.1 kB]
Get:75 http://172.17.0.1/private stretch-staging/main armhf golang-github-robfig-config-dev all 0.0~git20141208-1 [15.2 kB]
Get:76 http://172.17.0.1/private stretch-staging/main armhf golang-github-robfig-pathtree-dev all 0.0~git20140121-1 [5688 B]
Get:77 http://172.17.0.1/private stretch-staging/main armhf golang-github-revel-revel-dev all 0.12.0+dfsg-1 [68.8 kB]
Get:78 http://172.17.0.1/private stretch-staging/main armhf golang-github-bugsnag-bugsnag-go-dev all 1.0.5+dfsg-1 [28.1 kB]
Get:79 http://172.17.0.1/private stretch-staging/main armhf golang-github-getsentry-raven-go-dev all 0.0~git20150721.0.74c334d-1 [14.4 kB]
Get:80 http://172.17.0.1/private stretch-staging/main armhf golang-objx-dev all 0.0~git20140527-4 [20.1 kB]
Get:81 http://172.17.0.1/private stretch-staging/main armhf golang-github-stretchr-testify-dev all 1.0-2 [27.8 kB]
Get:82 http://172.17.0.1/private stretch-staging/main armhf golang-github-stvp-go-udp-testing-dev all 0.0~git20150316.0.abcd331-1 [3654 B]
Get:83 http://172.17.0.1/private stretch-staging/main armhf golang-github-tobi-airbrake-go-dev all 0.0~git20150109-1 [5960 B]
Get:84 http://172.17.0.1/private stretch-staging/main armhf golang-github-sirupsen-logrus-dev all 0.8.7-3 [26.3 kB]
Get:85 http://172.17.0.1/private stretch-staging/main armhf golang-logrus-dev all 0.8.7-3 [3014 B]
Get:86 http://172.17.0.1/private stretch-staging/main armhf golang-protobuf-extensions-dev all 0+git20150513.fc2b8d3-4 [8694 B]
Get:87 http://172.17.0.1/private stretch-staging/main armhf golang-yaml.v2-dev all 0.0+git20160301.0.a83829b-1 [52.3 kB]
Get:88 http://172.17.0.1/private stretch-staging/main armhf golang-github-prometheus-common-dev all 0+git20160321.4045694-1 [52.0 kB]
Get:89 http://172.17.0.1/private stretch-staging/main armhf golang-procfs-dev all 0+git20150616.c91d8ee-1 [13.5 kB]
Get:90 http://172.17.0.1/private stretch-staging/main armhf golang-prometheus-client-dev all 0.7.0+ds-3 [88.4 kB]
Get:91 http://172.17.0.1/private stretch-staging/main armhf golang-github-datadog-datadog-go-dev all 0.0~git20150930.0.b050cd8-1 [7034 B]
Get:92 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-go-metrics-dev all 0.0~git20151207.0.06b6099-1 [13.0 kB]
Get:93 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-go-radix-dev all 0.0~git20150602.0.fbd82e8-1 [6472 B]
Get:94 http://172.17.0.1/private stretch-staging/main armhf golang-github-armon-gomdb-dev all 0.0~git20150106.0.151f2e0-1 [7438 B]
Get:95 http://172.17.0.1/private stretch-staging/main armhf golang-github-go-ini-ini-dev all 1.8.6-2 [20.7 kB]
Get:96 http://172.17.0.1/private stretch-staging/main armhf golang-github-jmespath-go-jmespath-dev all 0.2.2-1 [18.8 kB]
Get:97 http://172.17.0.1/private stretch-staging/main armhf golang-github-shiena-ansicolor-dev all 0.0~git20151119.0.a422bbe-1 [8142 B]
Get:98 http://172.17.0.1/private stretch-staging/main armhf golang-github-lsegal-gucumber-dev all 0.0~git20160110.0.44a4d7e-1 [15.2 kB]
Get:99 http://172.17.0.1/private stretch-staging/main armhf golang-github-jacobsa-oglematchers-dev all 0.0~git20150320-1 [30.2 kB]
Get:100 http://172.17.0.1/private stretch-staging/main armhf golang-github-jacobsa-oglemock-dev all 0.0~git20150428-2 [24.8 kB]
Get:101 http://172.17.0.1/private stretch-staging/main armhf golang-go.net-dev all 1:0.0+git20160110.4fd4a9f-1 [9982 B]
Get:102 http://172.17.0.1/private stretch-staging/main armhf golang-github-jacobsa-reqtrace-dev all 0.0~git20150505-2 [4830 B]
Get:103 http://172.17.0.1/private stretch-staging/main armhf golang-github-jacobsa-ogletest-dev all 0.0~git20150610-4 [21.4 kB]
Get:104 http://172.17.0.1/private stretch-staging/main armhf golang-github-smartystreets-goconvey-dev all 1.5.0-1 [51.9 kB]
Get:105 http://172.17.0.1/private stretch-staging/main armhf golang-github-vaughan0-go-ini-dev all 0.0~git20130923.0.a98ad7e-1 [4550 B]
Get:106 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-tools-dev all 1:0.0~git20160315.0.f42ec61-2 [1372 kB]
Get:107 http://172.17.0.1/private stretch-staging/main armhf golang-github-aws-aws-sdk-go-dev all 1.1.14+dfsg-1 [1067 kB]
Get:108 http://172.17.0.1/private stretch-staging/main armhf golang-github-bgentry-speakeasy-dev all 0.0~git20150902.0.36e9cfd-1 [4632 B]
Get:109 http://172.17.0.1/private stretch-staging/main armhf golang-github-boltdb-bolt-dev all 1.2.0-1 [56.7 kB]
Get:110 http://172.17.0.1/private stretch-staging/main armhf golang-github-coreos-go-systemd-dev all 5-1 [31.8 kB]
Get:111 http://172.17.0.1/private stretch-staging/main armhf golang-github-docker-go-units-dev all 0.3.0-1 [11.8 kB]
Get:112 http://172.17.0.1/private stretch-staging/main armhf golang-github-dustin-go-humanize-dev all 0.0~git20151125.0.8929fe9-1 [12.4 kB]
Get:113 http://172.17.0.1/private stretch-staging/main armhf golang-github-elazarl-go-bindata-assetfs-dev all 0.0~git20151224.0.57eb5e1-1 [5088 B]
Get:114 http://172.17.0.1/private stretch-staging/main armhf golang-github-xeipuuv-gojsonpointer-dev all 0.0~git20151027.0.e0fe6f6-1 [4418 B]
Get:115 http://172.17.0.1/private stretch-staging/main armhf golang-github-xeipuuv-gojsonreference-dev all 0.0~git20150808.0.e02fc20-1 [4424 B]
Get:116 http://172.17.0.1/private stretch-staging/main armhf golang-github-xeipuuv-gojsonschema-dev all 0.0~git20160323.0.93e72a7-1 [23.5 kB]
Get:117 http://172.17.0.1/private stretch-staging/main armhf golang-github-opencontainers-specs-dev all 0.4.0-1 [11.5 kB]
Get:118 http://172.17.0.1/private stretch-staging/main armhf golang-github-seccomp-libseccomp-golang-dev all 0.0~git20150813.0.1b506fc-1 [12.9 kB]
Get:119 http://172.17.0.1/private stretch-staging/main armhf golang-github-vishvananda-netns-dev all 0.0~git20150710.0.604eaf1-1 [5448 B]
Get:120 http://172.17.0.1/private stretch-staging/main armhf golang-github-vishvananda-netlink-dev all 0.0~git20160306.0.4fdf23c-1 [50.5 kB]
Get:121 http://172.17.0.1/private stretch-staging/main armhf golang-gocapability-dev all 0.0~git20150506.1.66ef2aa-1 [10.8 kB]
Get:122 http://172.17.0.1/private stretch-staging/main armhf golang-github-opencontainers-runc-dev all 0.0.8+dfsg-2 [115 kB]
Get:123 http://172.17.0.1/private stretch-staging/main armhf golang-github-gorilla-mux-dev all 0.0~git20150814.0.f7b6aaa-1 [25.0 kB]
Get:124 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-sys-dev all 0.0~git20150612-1 [171 kB]
Get:125 http://172.17.0.1/private stretch-staging/main armhf golang-github-fsouza-go-dockerclient-dev all 0.0+git20160316-1 [170 kB]
Get:126 http://172.17.0.1/private stretch-staging/main armhf golang-github-gorhill-cronexpr-dev all 1.0.0-1 [9494 B]
Get:127 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-cleanhttp-dev all 0.0~git20160217.0.875fb67-1 [8256 B]
Get:128 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-checkpoint-dev all 0.0~git20151022.0.e4b2dc3-1 [11.2 kB]
Get:129 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-golang-lru-dev all 0.0~git20160207.0.a0d98a5-1 [12.9 kB]
Get:130 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-uuid-dev all 0.0~git20160218.0.6994546-1 [7306 B]
Get:131 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-immutable-radix-dev all 0.0~git20160222.0.8e8ed81-1 [13.4 kB]
Get:132 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-memdb-dev all 0.0~git20160301.0.98f52f5-1 [19.1 kB]
Get:133 http://172.17.0.1/private stretch-staging/main armhf golang-github-ugorji-go-msgpack-dev all 0.0~git20130605.792643-1 [20.3 kB]
Get:134 http://172.17.0.1/private stretch-staging/main armhf golang-github-ugorji-go-codec-dev all 0.0~git20151130.0.357a44b-1 [127 kB]
Get:135 http://172.17.0.1/private stretch-staging/main armhf golang-gopkg-vmihailenco-msgpack.v2-dev all 2.4.11-1 [17.9 kB]
Get:136 http://172.17.0.1/private stretch-staging/main armhf golang-gopkg-tomb.v2-dev all 0.0~git20140626.14b3d72-1 [5140 B]
Get:137 http://172.17.0.1/private stretch-staging/main armhf golang-gopkg-mgo.v2-dev all 2015.12.06-1 [138 kB]
Get:138 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-msgpack-dev all 0.0~git20150518-1 [42.1 kB]
Get:139 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-reap-dev all 0.0~git20160113.0.2d85522-1 [9084 B]
Get:140 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-syslog-dev all 0.0~git20150218.0.42a2b57-1 [5336 B]
Get:141 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-hcl-dev all 0.0~git20151110.0.fa160f1-1 [42.7 kB]
Get:142 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-logutils-dev all 0.0~git20150609.0.0dc08b1-1 [8150 B]
Get:143 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-memberlist-dev all 0.0~git20160225.0.ae9a8d9-1 [48.7 kB]
Get:144 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-raft-dev all 0.0~git20160317.0.3359516-1 [52.2 kB]
Get:145 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-raft-boltdb-dev all 0.0~git20150201.d1e82c1-1 [9744 B]
Get:146 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-errwrap-dev all 0.0~git20141028.0.7554cd9-1 [9692 B]
Get:147 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-multierror-dev all 0.0~git20150916.0.d30f099-1 [9274 B]
Get:148 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-net-rpc-msgpackrpc-dev all 0.0~git20151116.0.a14192a-1 [4168 B]
Get:149 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-yamux-dev all 0.0~git20151129.0.df94978-1 [20.0 kB]
Get:150 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-scada-client-dev all 0.0~git20150828.0.84989fd-1 [17.4 kB]
Get:151 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-mdns-dev all 0.0~git20150317.0.2b439d3-1 [10.9 kB]
Get:152 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-cli-dev all 0.0~git20160203.0.5c87c51-1 [16.9 kB]
Get:153 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-mapstructure-dev all 0.0~git20150717.0.281073e-2 [14.4 kB]
Get:154 http://172.17.0.1/private stretch-staging/main armhf golang-github-ryanuber-columnize-dev all 2.1.0-1 [5140 B]
Get:155 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-serf-dev all 0.7.0~ds1-1 [110 kB]
Get:156 http://172.17.0.1/private stretch-staging/main armhf golang-github-inconshreveable-muxado-dev all 0.0~git20140312.0.f693c7e-1 [26.4 kB]
Get:157 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-consul-dev all 0.6.3~dfsg-2 [557 kB]
Get:158 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-getter-dev all 0.0~git20160316.0.575ec4e-1 [27.9 kB]
Get:159 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-plugin-dev all 0.0~git20160212.0.cccb4a1-1 [21.9 kB]
Get:160 http://172.17.0.1/private stretch-staging/main armhf golang-github-hashicorp-go-version-dev all 0.0~git20150915.0.2b9865f-1 [10.8 kB]
Get:161 http://172.17.0.1/private stretch-staging/main armhf golang-golang-x-tools armhf 1:0.0~git20160315.0.f42ec61-2 [11.7 MB]
Get:162 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-reflectwalk-dev all 0.0~git20150527.0.eecf4c7-1 [5374 B]
Get:163 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-copystructure-dev all 0.0~git20160128.0.80adcec-1 [5098 B]
Get:164 http://172.17.0.1/private stretch-staging/main armhf golang-github-mitchellh-hashstructure-dev all 0.0~git20160209.0.6b17d66-1 [5960 B]
Get:165 http://172.17.0.1/private stretch-staging/main armhf golang-github-prometheus-client-model-dev all 0.0.2+git20150212.12.fa8ad6f-1 [9806 B]
Get:166 http://172.17.0.1/private stretch-staging/main armhf golang-github-shirou-gopsutil-dev all 1.0.0+git20160112-1 [51.0 kB]
debconf: delaying package configuration, since apt-utils is not installed
Fetched 67.5 MB in 9s (7319 kB/s)
(Reading database ... 13786 files and directories currently installed.)
Preparing to unpack .../libsystemd0_229-4_armhf.deb ...
Unpacking libsystemd0:armhf (229-4) over (229-2) ...
Processing triggers for libc-bin (2.22-3) ...
Setting up libsystemd0:armhf (229-4) ...
Processing triggers for libc-bin (2.22-3) ...
(Reading database ... 13786 files and directories currently installed.)
Preparing to unpack .../systemd_229-4_armhf.deb ...
Unpacking systemd (229-4) over (229-2) ...
Setting up systemd (229-4) ...
Adding group `systemd-journal' (GID 112) ...
Done.
Operation failed: No such file or directory
(Reading database ... 13786 files and directories currently installed.)
Preparing to unpack .../libseccomp2_2.3.0-1_armhf.deb ...
Unpacking libseccomp2:armhf (2.3.0-1) over (2.2.3-3) ...
Processing triggers for libc-bin (2.22-3) ...
Setting up libseccomp2:armhf (2.3.0-1) ...
Processing triggers for libc-bin (2.22-3) ...
Selecting previously unselected package groff-base.
(Reading database ... 13786 files and directories currently installed.)
Preparing to unpack .../groff-base_1.22.3-7_armhf.deb ...
Unpacking groff-base (1.22.3-7) ...
Selecting previously unselected package libbsd0:armhf.
Preparing to unpack .../libbsd0_0.8.2-1_armhf.deb ...
Unpacking libbsd0:armhf (0.8.2-1) ...
Selecting previously unselected package bsdmainutils.
Preparing to unpack .../bsdmainutils_9.0.10_armhf.deb ...
Unpacking bsdmainutils (9.0.10) ...
Selecting previously unselected package libpipeline1:armhf.
Preparing to unpack .../libpipeline1_1.4.1-2_armhf.deb ...
Unpacking libpipeline1:armhf (1.4.1-2) ...
Selecting previously unselected package man-db.
Preparing to unpack .../man-db_2.7.5-1_armhf.deb ...
Unpacking man-db (2.7.5-1) ...
Selecting previously unselected package liblmdb0:armhf.
Preparing to unpack .../liblmdb0_0.9.18-1_armhf.deb ...
Unpacking liblmdb0:armhf (0.9.18-1) ...
Selecting previously unselected package liblmdb-dev:armhf.
Preparing to unpack .../liblmdb-dev_0.9.18-1_armhf.deb ...
Unpacking liblmdb-dev:armhf (0.9.18-1) ...
Selecting previously unselected package libunistring0:armhf.
Preparing to unpack .../libunistring0_0.9.3-5.2_armhf.deb ...
Unpacking libunistring0:armhf (0.9.3-5.2) ...
Selecting previously unselected package libssl1.0.2:armhf.
Preparing to unpack .../libssl1.0.2_1.0.2g-1_armhf.deb ...
Unpacking libssl1.0.2:armhf (1.0.2g-1) ...
Selecting previously unselected package libmagic1:armhf.
Preparing to unpack .../libmagic1_1%3a5.25-2_armhf.deb ...
Unpacking libmagic1:armhf (1:5.25-2) ...
Selecting previously unselected package file.
Preparing to unpack .../file_1%3a5.25-2_armhf.deb ...
Unpacking file (1:5.25-2) ...
Selecting previously unselected package gettext-base.
Preparing to unpack .../gettext-base_0.19.7-2_armhf.deb ...
Unpacking gettext-base (0.19.7-2) ...
Selecting previously unselected package libsasl2-modules-db:armhf.
Preparing to unpack .../libsasl2-modules-db_2.1.26.dfsg1-15_armhf.deb ...
Unpacking libsasl2-modules-db:armhf (2.1.26.dfsg1-15) ...
Selecting previously unselected package libsasl2-2:armhf.
Preparing to unpack .../libsasl2-2_2.1.26.dfsg1-15_armhf.deb ...
Unpacking libsasl2-2:armhf (2.1.26.dfsg1-15) ...
Selecting previously unselected package libicu55:armhf.
Preparing to unpack .../libicu55_55.1-7_armhf.deb ...
Unpacking libicu55:armhf (55.1-7) ...
Selecting previously unselected package libxml2:armhf.
Preparing to unpack .../libxml2_2.9.3+dfsg1-1_armhf.deb ...
Unpacking libxml2:armhf (2.9.3+dfsg1-1) ...
Selecting previously unselected package libsigsegv2:armhf.
Preparing to unpack .../libsigsegv2_2.10-5_armhf.deb ...
Unpacking libsigsegv2:armhf (2.10-5) ...
Selecting previously unselected package m4.
Preparing to unpack .../archives/m4_1.4.17-5_armhf.deb ...
Unpacking m4 (1.4.17-5) ...
Selecting previously unselected package autoconf.
Preparing to unpack .../autoconf_2.69-10_all.deb ...
Unpacking autoconf (2.69-10) ...
Selecting previously unselected package autotools-dev.
Preparing to unpack .../autotools-dev_20150820.1_all.deb ...
Unpacking autotools-dev (20150820.1) ...
Selecting previously unselected package automake.
Preparing to unpack .../automake_1%3a1.15-4_all.deb ...
Unpacking automake (1:1.15-4) ...
Selecting previously unselected package autopoint.
Preparing to unpack .../autopoint_0.19.7-2_all.deb ...
Unpacking autopoint (0.19.7-2) ...
Selecting previously unselected package openssl.
Preparing to unpack .../openssl_1.0.2g-1_armhf.deb ...
Unpacking openssl (1.0.2g-1) ...
Selecting previously unselected package ca-certificates.
Preparing to unpack .../ca-certificates_20160104_all.deb ...
Unpacking ca-certificates (20160104) ...
Selecting previously unselected package libffi6:armhf.
Preparing to unpack .../libffi6_3.2.1-4_armhf.deb ...
Unpacking libffi6:armhf (3.2.1-4) ...
Selecting previously unselected package libglib2.0-0:armhf.
Preparing to unpack .../libglib2.0-0_2.48.0-1_armhf.deb ...
Unpacking libglib2.0-0:armhf (2.48.0-1) ...
Selecting previously unselected package libcroco3:armhf.
Preparing to unpack .../libcroco3_0.6.11-1_armhf.deb ...
Unpacking libcroco3:armhf (0.6.11-1) ...
Selecting previously unselected package gettext.
Preparing to unpack .../gettext_0.19.7-2_armhf.deb ...
Unpacking gettext (0.19.7-2) ...
Selecting previously unselected package intltool-debian.
Preparing to unpack .../intltool-debian_0.35.0+20060710.4_all.deb ...
Unpacking intltool-debian (0.35.0+20060710.4) ...
Selecting previously unselected package po-debconf.
Preparing to unpack .../po-debconf_1.0.19_all.deb ...
Unpacking po-debconf (1.0.19) ...
Selecting previously unselected package libarchive-zip-perl.
Preparing to unpack .../libarchive-zip-perl_1.57-1_all.deb ...
Unpacking libarchive-zip-perl (1.57-1) ...
Selecting previously unselected package libfile-stripnondeterminism-perl.
Preparing to unpack .../libfile-stripnondeterminism-perl_0.016-1_all.deb ...
Unpacking libfile-stripnondeterminism-perl (0.016-1) ...
Selecting previously unselected package libtimedate-perl.
Preparing to unpack .../libtimedate-perl_2.3000-2_all.deb ...
Unpacking libtimedate-perl (2.3000-2) ...
Selecting previously unselected package dh-strip-nondeterminism.
Preparing to unpack .../dh-strip-nondeterminism_0.016-1_all.deb ...
Unpacking dh-strip-nondeterminism (0.016-1) ...
Selecting previously unselected package libtool.
Preparing to unpack .../libtool_2.4.6-0.1_all.deb ...
Unpacking libtool (2.4.6-0.1) ...
Selecting previously unselected package dh-autoreconf.
Preparing to unpack .../dh-autoreconf_12_all.deb ...
Unpacking dh-autoreconf (12) ...
Selecting previously unselected package debhelper.
Preparing to unpack .../debhelper_9.20160403_all.deb ...
Unpacking debhelper (9.20160403) ...
Selecting previously unselected package golang-src.
Preparing to unpack .../golang-src_2%3a1.6-1+rpi1_armhf.deb ...
Unpacking golang-src (2:1.6-1+rpi1) ...
Selecting previously unselected package golang-go.
Preparing to unpack .../golang-go_2%3a1.6-1+rpi1_armhf.deb ...
Unpacking golang-go (2:1.6-1+rpi1) ...
Selecting previously unselected package golang-text-dev.
Preparing to unpack .../golang-text-dev_0.0~git20130502-1_all.deb ...
Unpacking golang-text-dev (0.0~git20130502-1) ...
Selecting previously unselected package golang-pretty-dev.
Preparing to unpack .../golang-pretty-dev_0.0~git20130613-1_all.deb ...
Unpacking golang-pretty-dev (0.0~git20130613-1) ...
Selecting previously unselected package golang-github-bmizerany-assert-dev.
Preparing to unpack .../golang-github-bmizerany-assert-dev_0.0~git20120716-1_all.deb ...
Unpacking golang-github-bmizerany-assert-dev (0.0~git20120716-1) ...
Selecting previously unselected package golang-github-bitly-go-simplejson-dev.
Preparing to unpack .../golang-github-bitly-go-simplejson-dev_0.5.0-1_all.deb ...
Unpacking golang-github-bitly-go-simplejson-dev (0.5.0-1) ...
Selecting previously unselected package golang-github-docker-docker-dev.
Preparing to unpack .../golang-github-docker-docker-dev_1.8.3~ds1-2_all.deb ...
Unpacking golang-github-docker-docker-dev (1.8.3~ds1-2) ...
Selecting previously unselected package golang-github-mattn-go-isatty-dev.
Preparing to unpack .../golang-github-mattn-go-isatty-dev_0.0.1-1_all.deb ...
Unpacking golang-github-mattn-go-isatty-dev (0.0.1-1) ...
Selecting previously unselected package libjs-jquery.
Preparing to unpack .../libjs-jquery_1.11.3+dfsg-4_all.deb ...
Unpacking libjs-jquery (1.11.3+dfsg-4) ...
Selecting previously unselected package libjs-jquery-ui.
Preparing to unpack .../libjs-jquery-ui_1.10.1+dfsg-1_all.deb ...
Unpacking libjs-jquery-ui (1.10.1+dfsg-1) ...
Selecting previously unselected package libprotobuf9v5:armhf.
Preparing to unpack .../libprotobuf9v5_2.6.1-1.3_armhf.deb ...
Unpacking libprotobuf9v5:armhf (2.6.1-1.3) ...
Selecting previously unselected package libprotoc9v5:armhf.
Preparing to unpack .../libprotoc9v5_2.6.1-1.3_armhf.deb ...
Unpacking libprotoc9v5:armhf (2.6.1-1.3) ...
Selecting previously unselected package libsasl2-dev.
Preparing to unpack .../libsasl2-dev_2.1.26.dfsg1-15_armhf.deb ...
Unpacking libsasl2-dev (2.1.26.dfsg1-15) ...
Selecting previously unselected package libseccomp-dev:armhf.
Preparing to unpack .../libseccomp-dev_2.3.0-1_armhf.deb ...
Unpacking libseccomp-dev:armhf (2.3.0-1) ...
Selecting previously unselected package libsystemd-dev:armhf.
Preparing to unpack .../libsystemd-dev_229-4_armhf.deb ...
Unpacking libsystemd-dev:armhf (229-4) ...
Selecting previously unselected package pkg-config.
Preparing to unpack .../pkg-config_0.29-3_armhf.deb ...
Unpacking pkg-config (0.29-3) ...
Selecting previously unselected package protobuf-compiler.
Preparing to unpack .../protobuf-compiler_2.6.1-1.3_armhf.deb ...
Unpacking protobuf-compiler (2.6.1-1.3) ...
Selecting previously unselected package dh-golang.
Preparing to unpack .../dh-golang_1.12_all.deb ...
Unpacking dh-golang (1.12) ...
Selecting previously unselected package dh-systemd.
Preparing to unpack .../dh-systemd_1.29_all.deb ...
Unpacking dh-systemd (1.29) ...
Selecting previously unselected package golang-check.v1-dev.
Preparing to unpack .../golang-check.v1-dev_0.0+git20150729.11d3bc7-3_all.deb ...
Unpacking golang-check.v1-dev (0.0+git20150729.11d3bc7-3) ...
Selecting previously unselected package golang-github-codegangsta-cli-dev.
Preparing to unpack .../golang-github-codegangsta-cli-dev_0.0~git20151221-1_all.deb ...
Unpacking golang-github-codegangsta-cli-dev (0.0~git20151221-1) ...
Selecting previously unselected package golang-codegangsta-cli-dev.
Preparing to unpack .../golang-codegangsta-cli-dev_0.0~git20151221-1_all.deb ...
Unpacking golang-codegangsta-cli-dev (0.0~git20151221-1) ...
Selecting previously unselected package golang-context-dev.
Preparing to unpack .../golang-context-dev_0.0~git20140604.1.14f550f-1_all.deb ...
Unpacking golang-context-dev (0.0~git20140604.1.14f550f-1) ...
Selecting previously unselected package golang-dbus-dev.
Preparing to unpack .../golang-dbus-dev_3-1_all.deb ...
Unpacking golang-dbus-dev (3-1) ...
Selecting previously unselected package golang-dns-dev.
Preparing to unpack .../golang-dns-dev_0.0~git20151030.0.6a15566-1_all.deb ...
Unpacking golang-dns-dev (0.0~git20151030.0.6a15566-1) ...
Selecting previously unselected package golang-github-agtorre-gocolorize-dev.
Preparing to unpack .../golang-github-agtorre-gocolorize-dev_1.0.0-1_all.deb ...
Unpacking golang-github-agtorre-gocolorize-dev (1.0.0-1) ...
Selecting previously unselected package golang-github-armon-circbuf-dev.
Preparing to unpack .../golang-github-armon-circbuf-dev_0.0~git20150827.0.bbbad09-1_all.deb ...
Unpacking golang-github-armon-circbuf-dev (0.0~git20150827.0.bbbad09-1) ...
Selecting previously unselected package golang-github-julienschmidt-httprouter-dev.
Preparing to unpack .../golang-github-julienschmidt-httprouter-dev_1.1-1_all.deb ...
Unpacking golang-github-julienschmidt-httprouter-dev (1.1-1) ...
Selecting previously unselected package golang-x-text-dev.
Preparing to unpack .../golang-x-text-dev_0+git20151217.cf49866-1_all.deb ...
Unpacking golang-x-text-dev (0+git20151217.cf49866-1) ...
Selecting previously unselected package golang-golang-x-crypto-dev.
Preparing to unpack .../golang-golang-x-crypto-dev_1%3a0.0~git20151201.0.7b85b09-2_all.deb ...
Unpacking golang-golang-x-crypto-dev (1:0.0~git20151201.0.7b85b09-2) ...
Selecting previously unselected package golang-golang-x-net-dev.
Preparing to unpack .../golang-golang-x-net-dev_1%3a0.0+git20160110.4fd4a9f-1_all.deb ...
Unpacking golang-golang-x-net-dev (1:0.0+git20160110.4fd4a9f-1) ...
Selecting previously unselected package golang-goprotobuf-dev.
Preparing to unpack .../golang-goprotobuf-dev_0.0~git20150526-2_armhf.deb ...
Unpacking golang-goprotobuf-dev (0.0~git20150526-2) ...
Selecting previously unselected package golang-github-kardianos-osext-dev.
Preparing to unpack .../golang-github-kardianos-osext-dev_0.0~git20151124.0.10da294-2_all.deb ...
Unpacking golang-github-kardianos-osext-dev (0.0~git20151124.0.10da294-2) ...
Selecting previously unselected package golang-github-bugsnag-panicwrap-dev.
Preparing to unpack .../golang-github-bugsnag-panicwrap-dev_1.1.0-1_all.deb ...
Unpacking golang-github-bugsnag-panicwrap-dev (1.1.0-1) ...
Selecting previously unselected package golang-github-juju-loggo-dev.
Preparing to unpack .../golang-github-juju-loggo-dev_0.0~git20150527.0.8477fc9-1_all.deb ...
Unpacking golang-github-juju-loggo-dev (0.0~git20150527.0.8477fc9-1) ...
Selecting previously unselected package golang-github-bradfitz-gomemcache-dev.
Preparing to unpack .../golang-github-bradfitz-gomemcache-dev_0.0~git20141109-1_all.deb ...
Unpacking golang-github-bradfitz-gomemcache-dev (0.0~git20141109-1) ...
Selecting previously unselected package golang-github-garyburd-redigo-dev.
Preparing to unpack .../golang-github-garyburd-redigo-dev_0.0~git20150901.0.d8dbe4d-1_all.deb ...
Unpacking golang-github-garyburd-redigo-dev (0.0~git20150901.0.d8dbe4d-1) ...
Selecting previously unselected package golang-github-go-fsnotify-fsnotify-dev.
Preparing to unpack .../golang-github-go-fsnotify-fsnotify-dev_1.2.9-1_all.deb ...
Unpacking golang-github-go-fsnotify-fsnotify-dev (1.2.9-1) ...
Selecting previously unselected package golang-github-robfig-config-dev.
Preparing to unpack .../golang-github-robfig-config-dev_0.0~git20141208-1_all.deb ...
Unpacking golang-github-robfig-config-dev (0.0~git20141208-1) ...
Selecting previously unselected package golang-github-robfig-pathtree-dev.
Preparing to unpack .../golang-github-robfig-pathtree-dev_0.0~git20140121-1_all.deb ...
Unpacking golang-github-robfig-pathtree-dev (0.0~git20140121-1) ...
Selecting previously unselected package golang-github-revel-revel-dev.
Preparing to unpack .../golang-github-revel-revel-dev_0.12.0+dfsg-1_all.deb ...
Unpacking golang-github-revel-revel-dev (0.12.0+dfsg-1) ...
Selecting previously unselected package golang-github-bugsnag-bugsnag-go-dev.
Preparing to unpack .../golang-github-bugsnag-bugsnag-go-dev_1.0.5+dfsg-1_all.deb ...
Unpacking golang-github-bugsnag-bugsnag-go-dev (1.0.5+dfsg-1) ...
Selecting previously unselected package golang-github-getsentry-raven-go-dev.
Preparing to unpack .../golang-github-getsentry-raven-go-dev_0.0~git20150721.0.74c334d-1_all.deb ...
Unpacking golang-github-getsentry-raven-go-dev (0.0~git20150721.0.74c334d-1) ...
Selecting previously unselected package golang-objx-dev.
Preparing to unpack .../golang-objx-dev_0.0~git20140527-4_all.deb ...
Unpacking golang-objx-dev (0.0~git20140527-4) ...
Selecting previously unselected package golang-github-stretchr-testify-dev.
Preparing to unpack .../golang-github-stretchr-testify-dev_1.0-2_all.deb ...
Unpacking golang-github-stretchr-testify-dev (1.0-2) ...
Selecting previously unselected package golang-github-stvp-go-udp-testing-dev.
Preparing to unpack .../golang-github-stvp-go-udp-testing-dev_0.0~git20150316.0.abcd331-1_all.deb ...
Unpacking golang-github-stvp-go-udp-testing-dev (0.0~git20150316.0.abcd331-1) ...
Selecting previously unselected package golang-github-tobi-airbrake-go-dev.
Preparing to unpack .../golang-github-tobi-airbrake-go-dev_0.0~git20150109-1_all.deb ...
Unpacking golang-github-tobi-airbrake-go-dev (0.0~git20150109-1) ...
Selecting previously unselected package golang-github-sirupsen-logrus-dev.
Preparing to unpack .../golang-github-sirupsen-logrus-dev_0.8.7-3_all.deb ...
Unpacking golang-github-sirupsen-logrus-dev (0.8.7-3) ...
Selecting previously unselected package golang-logrus-dev.
Preparing to unpack .../golang-logrus-dev_0.8.7-3_all.deb ...
Unpacking golang-logrus-dev (0.8.7-3) ...
Selecting previously unselected package golang-protobuf-extensions-dev.
Preparing to unpack .../golang-protobuf-extensions-dev_0+git20150513.fc2b8d3-4_all.deb ...
Unpacking golang-protobuf-extensions-dev (0+git20150513.fc2b8d3-4) ...
Selecting previously unselected package golang-yaml.v2-dev.
Preparing to unpack .../golang-yaml.v2-dev_0.0+git20160301.0.a83829b-1_all.deb ...
Unpacking golang-yaml.v2-dev (0.0+git20160301.0.a83829b-1) ...
Selecting previously unselected package golang-github-prometheus-common-dev.
Preparing to unpack .../golang-github-prometheus-common-dev_0+git20160321.4045694-1_all.deb ...
Unpacking golang-github-prometheus-common-dev (0+git20160321.4045694-1) ...
Selecting previously unselected package golang-procfs-dev.
Preparing to unpack .../golang-procfs-dev_0+git20150616.c91d8ee-1_all.deb ...
Unpacking golang-procfs-dev (0+git20150616.c91d8ee-1) ...
Selecting previously unselected package golang-prometheus-client-dev.
Preparing to unpack .../golang-prometheus-client-dev_0.7.0+ds-3_all.deb ...
Unpacking golang-prometheus-client-dev (0.7.0+ds-3) ...
Selecting previously unselected package golang-github-datadog-datadog-go-dev.
Preparing to unpack .../golang-github-datadog-datadog-go-dev_0.0~git20150930.0.b050cd8-1_all.deb ...
Unpacking golang-github-datadog-datadog-go-dev (0.0~git20150930.0.b050cd8-1) ...
Selecting previously unselected package golang-github-armon-go-metrics-dev.
Preparing to unpack .../golang-github-armon-go-metrics-dev_0.0~git20151207.0.06b6099-1_all.deb ...
Unpacking golang-github-armon-go-metrics-dev (0.0~git20151207.0.06b6099-1) ...
Selecting previously unselected package golang-github-armon-go-radix-dev.
Preparing to unpack .../golang-github-armon-go-radix-dev_0.0~git20150602.0.fbd82e8-1_all.deb ...
Unpacking golang-github-armon-go-radix-dev (0.0~git20150602.0.fbd82e8-1) ...
Selecting previously unselected package golang-github-armon-gomdb-dev.
Preparing to unpack .../golang-github-armon-gomdb-dev_0.0~git20150106.0.151f2e0-1_all.deb ...
Unpacking golang-github-armon-gomdb-dev (0.0~git20150106.0.151f2e0-1) ...
Selecting previously unselected package golang-github-go-ini-ini-dev.
Preparing to unpack .../golang-github-go-ini-ini-dev_1.8.6-2_all.deb ...
Unpacking golang-github-go-ini-ini-dev (1.8.6-2) ...
Selecting previously unselected package golang-github-jmespath-go-jmespath-dev.
Preparing to unpack .../golang-github-jmespath-go-jmespath-dev_0.2.2-1_all.deb ...
Unpacking golang-github-jmespath-go-jmespath-dev (0.2.2-1) ...
Selecting previously unselected package golang-github-shiena-ansicolor-dev.
Preparing to unpack .../golang-github-shiena-ansicolor-dev_0.0~git20151119.0.a422bbe-1_all.deb ...
Unpacking golang-github-shiena-ansicolor-dev (0.0~git20151119.0.a422bbe-1) ...
Selecting previously unselected package golang-github-lsegal-gucumber-dev.
Preparing to unpack .../golang-github-lsegal-gucumber-dev_0.0~git20160110.0.44a4d7e-1_all.deb ...
Unpacking golang-github-lsegal-gucumber-dev (0.0~git20160110.0.44a4d7e-1) ...
Selecting previously unselected package golang-github-jacobsa-oglematchers-dev.
Preparing to unpack .../golang-github-jacobsa-oglematchers-dev_0.0~git20150320-1_all.deb ...
Unpacking golang-github-jacobsa-oglematchers-dev (0.0~git20150320-1) ...
Selecting previously unselected package golang-github-jacobsa-oglemock-dev.
Preparing to unpack .../golang-github-jacobsa-oglemock-dev_0.0~git20150428-2_all.deb ...
Unpacking golang-github-jacobsa-oglemock-dev (0.0~git20150428-2) ...
Selecting previously unselected package golang-go.net-dev.
Preparing to unpack .../golang-go.net-dev_1%3a0.0+git20160110.4fd4a9f-1_all.deb ...
Unpacking golang-go.net-dev (1:0.0+git20160110.4fd4a9f-1) ...
Selecting previously unselected package golang-github-jacobsa-reqtrace-dev.
Preparing to unpack .../golang-github-jacobsa-reqtrace-dev_0.0~git20150505-2_all.deb ...
Unpacking golang-github-jacobsa-reqtrace-dev (0.0~git20150505-2) ...
Selecting previously unselected package golang-github-jacobsa-ogletest-dev.
Preparing to unpack .../golang-github-jacobsa-ogletest-dev_0.0~git20150610-4_all.deb ...
Unpacking golang-github-jacobsa-ogletest-dev (0.0~git20150610-4) ...
Selecting previously unselected package golang-github-smartystreets-goconvey-dev.
Preparing to unpack .../golang-github-smartystreets-goconvey-dev_1.5.0-1_all.deb ...
Unpacking golang-github-smartystreets-goconvey-dev (1.5.0-1) ...
Selecting previously unselected package golang-github-vaughan0-go-ini-dev.
Preparing to unpack .../golang-github-vaughan0-go-ini-dev_0.0~git20130923.0.a98ad7e-1_all.deb ...
Unpacking golang-github-vaughan0-go-ini-dev (0.0~git20130923.0.a98ad7e-1) ...
Selecting previously unselected package golang-golang-x-tools-dev.
Preparing to unpack .../golang-golang-x-tools-dev_1%3a0.0~git20160315.0.f42ec61-2_all.deb ...
Unpacking golang-golang-x-tools-dev (1:0.0~git20160315.0.f42ec61-2) ...
Selecting previously unselected package golang-github-aws-aws-sdk-go-dev.
Preparing to unpack .../golang-github-aws-aws-sdk-go-dev_1.1.14+dfsg-1_all.deb ...
Unpacking golang-github-aws-aws-sdk-go-dev (1.1.14+dfsg-1) ...
Selecting previously unselected package golang-github-bgentry-speakeasy-dev.
Preparing to unpack .../golang-github-bgentry-speakeasy-dev_0.0~git20150902.0.36e9cfd-1_all.deb ...
Unpacking golang-github-bgentry-speakeasy-dev (0.0~git20150902.0.36e9cfd-1) ...
Selecting previously unselected package golang-github-boltdb-bolt-dev.
Preparing to unpack .../golang-github-boltdb-bolt-dev_1.2.0-1_all.deb ...
Unpacking golang-github-boltdb-bolt-dev (1.2.0-1) ...
Selecting previously unselected package golang-github-coreos-go-systemd-dev.
Preparing to unpack .../golang-github-coreos-go-systemd-dev_5-1_all.deb ...
Unpacking golang-github-coreos-go-systemd-dev (5-1) ...
Selecting previously unselected package golang-github-docker-go-units-dev.
Preparing to unpack .../golang-github-docker-go-units-dev_0.3.0-1_all.deb ...
Unpacking golang-github-docker-go-units-dev (0.3.0-1) ...
Selecting previously unselected package golang-github-dustin-go-humanize-dev.
Preparing to unpack .../golang-github-dustin-go-humanize-dev_0.0~git20151125.0.8929fe9-1_all.deb ...
Unpacking golang-github-dustin-go-humanize-dev (0.0~git20151125.0.8929fe9-1) ...
Selecting previously unselected package golang-github-elazarl-go-bindata-assetfs-dev.
Preparing to unpack .../golang-github-elazarl-go-bindata-assetfs-dev_0.0~git20151224.0.57eb5e1-1_all.deb ...
Unpacking golang-github-elazarl-go-bindata-assetfs-dev (0.0~git20151224.0.57eb5e1-1) ...
Selecting previously unselected package golang-github-xeipuuv-gojsonpointer-dev.
Preparing to unpack .../golang-github-xeipuuv-gojsonpointer-dev_0.0~git20151027.0.e0fe6f6-1_all.deb ...
Unpacking golang-github-xeipuuv-gojsonpointer-dev (0.0~git20151027.0.e0fe6f6-1) ...
Selecting previously unselected package golang-github-xeipuuv-gojsonreference-dev.
Preparing to unpack .../golang-github-xeipuuv-gojsonreference-dev_0.0~git20150808.0.e02fc20-1_all.deb ...
Unpacking golang-github-xeipuuv-gojsonreference-dev (0.0~git20150808.0.e02fc20-1) ...
Selecting previously unselected package golang-github-xeipuuv-gojsonschema-dev.
Preparing to unpack .../golang-github-xeipuuv-gojsonschema-dev_0.0~git20160323.0.93e72a7-1_all.deb ...
Unpacking golang-github-xeipuuv-gojsonschema-dev (0.0~git20160323.0.93e72a7-1) ...
Selecting previously unselected package golang-github-opencontainers-specs-dev.
Preparing to unpack .../golang-github-opencontainers-specs-dev_0.4.0-1_all.deb ...
Unpacking golang-github-opencontainers-specs-dev (0.4.0-1) ...
Selecting previously unselected package golang-github-seccomp-libseccomp-golang-dev.
Preparing to unpack .../golang-github-seccomp-libseccomp-golang-dev_0.0~git20150813.0.1b506fc-1_all.deb ...
Unpacking golang-github-seccomp-libseccomp-golang-dev (0.0~git20150813.0.1b506fc-1) ...
Selecting previously unselected package golang-github-vishvananda-netns-dev.
Preparing to unpack .../golang-github-vishvananda-netns-dev_0.0~git20150710.0.604eaf1-1_all.deb ...
Unpacking golang-github-vishvananda-netns-dev (0.0~git20150710.0.604eaf1-1) ...
Selecting previously unselected package golang-github-vishvananda-netlink-dev.
Preparing to unpack .../golang-github-vishvananda-netlink-dev_0.0~git20160306.0.4fdf23c-1_all.deb ...
Unpacking golang-github-vishvananda-netlink-dev (0.0~git20160306.0.4fdf23c-1) ...
Selecting previously unselected package golang-gocapability-dev.
Preparing to unpack .../golang-gocapability-dev_0.0~git20150506.1.66ef2aa-1_all.deb ...
Unpacking golang-gocapability-dev (0.0~git20150506.1.66ef2aa-1) ...
Selecting previously unselected package golang-github-opencontainers-runc-dev.
Preparing to unpack .../golang-github-opencontainers-runc-dev_0.0.8+dfsg-2_all.deb ...
Unpacking golang-github-opencontainers-runc-dev (0.0.8+dfsg-2) ...
Selecting previously unselected package golang-github-gorilla-mux-dev.
Preparing to unpack .../golang-github-gorilla-mux-dev_0.0~git20150814.0.f7b6aaa-1_all.deb ...
Unpacking golang-github-gorilla-mux-dev (0.0~git20150814.0.f7b6aaa-1) ...
Selecting previously unselected package golang-golang-x-sys-dev.
Preparing to unpack .../golang-golang-x-sys-dev_0.0~git20150612-1_all.deb ...
Unpacking golang-golang-x-sys-dev (0.0~git20150612-1) ...
Selecting previously unselected package golang-github-fsouza-go-dockerclient-dev.
Preparing to unpack .../golang-github-fsouza-go-dockerclient-dev_0.0+git20160316-1_all.deb ...
Unpacking golang-github-fsouza-go-dockerclient-dev (0.0+git20160316-1) ...
Selecting previously unselected package golang-github-gorhill-cronexpr-dev.
Preparing to unpack .../golang-github-gorhill-cronexpr-dev_1.0.0-1_all.deb ...
Unpacking golang-github-gorhill-cronexpr-dev (1.0.0-1) ...
Selecting previously unselected package golang-github-hashicorp-go-cleanhttp-dev.
Preparing to unpack .../golang-github-hashicorp-go-cleanhttp-dev_0.0~git20160217.0.875fb67-1_all.deb ...
Unpacking golang-github-hashicorp-go-cleanhttp-dev (0.0~git20160217.0.875fb67-1) ...
Selecting previously unselected package golang-github-hashicorp-go-checkpoint-dev.
Preparing to unpack .../golang-github-hashicorp-go-checkpoint-dev_0.0~git20151022.0.e4b2dc3-1_all.deb ...
Unpacking golang-github-hashicorp-go-checkpoint-dev (0.0~git20151022.0.e4b2dc3-1) ...
Selecting previously unselected package golang-github-hashicorp-golang-lru-dev.
Preparing to unpack .../golang-github-hashicorp-golang-lru-dev_0.0~git20160207.0.a0d98a5-1_all.deb ...
Unpacking golang-github-hashicorp-golang-lru-dev (0.0~git20160207.0.a0d98a5-1) ...
Selecting previously unselected package golang-github-hashicorp-uuid-dev.
Preparing to unpack .../golang-github-hashicorp-uuid-dev_0.0~git20160218.0.6994546-1_all.deb ...
Unpacking golang-github-hashicorp-uuid-dev (0.0~git20160218.0.6994546-1) ...
Selecting previously unselected package golang-github-hashicorp-go-immutable-radix-dev.
Preparing to unpack .../golang-github-hashicorp-go-immutable-radix-dev_0.0~git20160222.0.8e8ed81-1_all.deb ...
Unpacking golang-github-hashicorp-go-immutable-radix-dev (0.0~git20160222.0.8e8ed81-1) ...
Selecting previously unselected package golang-github-hashicorp-go-memdb-dev.
Preparing to unpack .../golang-github-hashicorp-go-memdb-dev_0.0~git20160301.0.98f52f5-1_all.deb ...
Unpacking golang-github-hashicorp-go-memdb-dev (0.0~git20160301.0.98f52f5-1) ...
Selecting previously unselected package golang-github-ugorji-go-msgpack-dev.
Preparing to unpack .../golang-github-ugorji-go-msgpack-dev_0.0~git20130605.792643-1_all.deb ...
Unpacking golang-github-ugorji-go-msgpack-dev (0.0~git20130605.792643-1) ...
Selecting previously unselected package golang-github-ugorji-go-codec-dev.
Preparing to unpack .../golang-github-ugorji-go-codec-dev_0.0~git20151130.0.357a44b-1_all.deb ...
Unpacking golang-github-ugorji-go-codec-dev (0.0~git20151130.0.357a44b-1) ...
Selecting previously unselected package golang-gopkg-vmihailenco-msgpack.v2-dev.
Preparing to unpack .../golang-gopkg-vmihailenco-msgpack.v2-dev_2.4.11-1_all.deb ...
Unpacking golang-gopkg-vmihailenco-msgpack.v2-dev (2.4.11-1) ...
Selecting previously unselected package golang-gopkg-tomb.v2-dev.
Preparing to unpack .../golang-gopkg-tomb.v2-dev_0.0~git20140626.14b3d72-1_all.deb ...
Unpacking golang-gopkg-tomb.v2-dev (0.0~git20140626.14b3d72-1) ...
Selecting previously unselected package golang-gopkg-mgo.v2-dev.
Preparing to unpack .../golang-gopkg-mgo.v2-dev_2015.12.06-1_all.deb ...
Unpacking golang-gopkg-mgo.v2-dev (2015.12.06-1) ...
Selecting previously unselected package golang-github-hashicorp-go-msgpack-dev.
Preparing to unpack .../golang-github-hashicorp-go-msgpack-dev_0.0~git20150518-1_all.deb ...
Unpacking golang-github-hashicorp-go-msgpack-dev (0.0~git20150518-1) ...
Selecting previously unselected package golang-github-hashicorp-go-reap-dev.
Preparing to unpack .../golang-github-hashicorp-go-reap-dev_0.0~git20160113.0.2d85522-1_all.deb ...
Unpacking golang-github-hashicorp-go-reap-dev (0.0~git20160113.0.2d85522-1) ...
Selecting previously unselected package golang-github-hashicorp-go-syslog-dev.
Preparing to unpack .../golang-github-hashicorp-go-syslog-dev_0.0~git20150218.0.42a2b57-1_all.deb ...
Unpacking golang-github-hashicorp-go-syslog-dev (0.0~git20150218.0.42a2b57-1) ...
Selecting previously unselected package golang-github-hashicorp-hcl-dev.
Preparing to unpack .../golang-github-hashicorp-hcl-dev_0.0~git20151110.0.fa160f1-1_all.deb ...
Unpacking golang-github-hashicorp-hcl-dev (0.0~git20151110.0.fa160f1-1) ...
Selecting previously unselected package golang-github-hashicorp-logutils-dev.
Preparing to unpack .../golang-github-hashicorp-logutils-dev_0.0~git20150609.0.0dc08b1-1_all.deb ...
Unpacking golang-github-hashicorp-logutils-dev (0.0~git20150609.0.0dc08b1-1) ...
Selecting previously unselected package golang-github-hashicorp-memberlist-dev.
Preparing to unpack .../golang-github-hashicorp-memberlist-dev_0.0~git20160225.0.ae9a8d9-1_all.deb ...
Unpacking golang-github-hashicorp-memberlist-dev (0.0~git20160225.0.ae9a8d9-1) ...
Selecting previously unselected package golang-github-hashicorp-raft-dev.
Preparing to unpack .../golang-github-hashicorp-raft-dev_0.0~git20160317.0.3359516-1_all.deb ...
Unpacking golang-github-hashicorp-raft-dev (0.0~git20160317.0.3359516-1) ...
Selecting previously unselected package golang-github-hashicorp-raft-boltdb-dev.
Preparing to unpack .../golang-github-hashicorp-raft-boltdb-dev_0.0~git20150201.d1e82c1-1_all.deb ...
Unpacking golang-github-hashicorp-raft-boltdb-dev (0.0~git20150201.d1e82c1-1) ...
Selecting previously unselected package golang-github-hashicorp-errwrap-dev.
Preparing to unpack .../golang-github-hashicorp-errwrap-dev_0.0~git20141028.0.7554cd9-1_all.deb ...
Unpacking golang-github-hashicorp-errwrap-dev (0.0~git20141028.0.7554cd9-1) ...
Selecting previously unselected package golang-github-hashicorp-go-multierror-dev.
Preparing to unpack .../golang-github-hashicorp-go-multierror-dev_0.0~git20150916.0.d30f099-1_all.deb ...
Unpacking golang-github-hashicorp-go-multierror-dev (0.0~git20150916.0.d30f099-1) ...
Selecting previously unselected package golang-github-hashicorp-net-rpc-msgpackrpc-dev.
Preparing to unpack .../golang-github-hashicorp-net-rpc-msgpackrpc-dev_0.0~git20151116.0.a14192a-1_all.deb ...
Unpacking golang-github-hashicorp-net-rpc-msgpackrpc-dev (0.0~git20151116.0.a14192a-1) ...
Selecting previously unselected package golang-github-hashicorp-yamux-dev.
Preparing to unpack .../golang-github-hashicorp-yamux-dev_0.0~git20151129.0.df94978-1_all.deb ...
Unpacking golang-github-hashicorp-yamux-dev (0.0~git20151129.0.df94978-1) ...
Selecting previously unselected package golang-github-hashicorp-scada-client-dev.
Preparing to unpack .../golang-github-hashicorp-scada-client-dev_0.0~git20150828.0.84989fd-1_all.deb ...
Unpacking golang-github-hashicorp-scada-client-dev (0.0~git20150828.0.84989fd-1) ...
Selecting previously unselected package golang-github-hashicorp-mdns-dev.
Preparing to unpack .../golang-github-hashicorp-mdns-dev_0.0~git20150317.0.2b439d3-1_all.deb ...
Unpacking golang-github-hashicorp-mdns-dev (0.0~git20150317.0.2b439d3-1) ...
Selecting previously unselected package golang-github-mitchellh-cli-dev.
Preparing to unpack .../golang-github-mitchellh-cli-dev_0.0~git20160203.0.5c87c51-1_all.deb ...
Unpacking golang-github-mitchellh-cli-dev (0.0~git20160203.0.5c87c51-1) ...
Selecting previously unselected package golang-github-mitchellh-mapstructure-dev.
Preparing to unpack .../golang-github-mitchellh-mapstructure-dev_0.0~git20150717.0.281073e-2_all.deb ...
Unpacking golang-github-mitchellh-mapstructure-dev (0.0~git20150717.0.281073e-2) ...
Selecting previously unselected package golang-github-ryanuber-columnize-dev.
Preparing to unpack .../golang-github-ryanuber-columnize-dev_2.1.0-1_all.deb ...
Unpacking golang-github-ryanuber-columnize-dev (2.1.0-1) ...
Selecting previously unselected package golang-github-hashicorp-serf-dev.
Preparing to unpack .../golang-github-hashicorp-serf-dev_0.7.0~ds1-1_all.deb ...
Unpacking golang-github-hashicorp-serf-dev (0.7.0~ds1-1) ...
Selecting previously unselected package golang-github-inconshreveable-muxado-dev.
Preparing to unpack .../golang-github-inconshreveable-muxado-dev_0.0~git20140312.0.f693c7e-1_all.deb ...
Unpacking golang-github-inconshreveable-muxado-dev (0.0~git20140312.0.f693c7e-1) ...
Selecting previously unselected package golang-github-hashicorp-consul-dev.
Preparing to unpack .../golang-github-hashicorp-consul-dev_0.6.3~dfsg-2_all.deb ...
Unpacking golang-github-hashicorp-consul-dev (0.6.3~dfsg-2) ...
Selecting previously unselected package golang-github-hashicorp-go-getter-dev.
Preparing to unpack .../golang-github-hashicorp-go-getter-dev_0.0~git20160316.0.575ec4e-1_all.deb ...
Unpacking golang-github-hashicorp-go-getter-dev (0.0~git20160316.0.575ec4e-1) ...
Selecting previously unselected package golang-github-hashicorp-go-plugin-dev.
Preparing to unpack .../golang-github-hashicorp-go-plugin-dev_0.0~git20160212.0.cccb4a1-1_all.deb ...
Unpacking golang-github-hashicorp-go-plugin-dev (0.0~git20160212.0.cccb4a1-1) ...
Selecting previously unselected package golang-github-hashicorp-go-version-dev.
Preparing to unpack .../golang-github-hashicorp-go-version-dev_0.0~git20150915.0.2b9865f-1_all.deb ...
Unpacking golang-github-hashicorp-go-version-dev (0.0~git20150915.0.2b9865f-1) ...
Selecting previously unselected package golang-golang-x-tools.
Preparing to unpack .../golang-golang-x-tools_1%3a0.0~git20160315.0.f42ec61-2_armhf.deb ...
Unpacking golang-golang-x-tools (1:0.0~git20160315.0.f42ec61-2) ...
Selecting previously unselected package golang-github-mitchellh-reflectwalk-dev.
Preparing to unpack .../golang-github-mitchellh-reflectwalk-dev_0.0~git20150527.0.eecf4c7-1_all.deb ...
Unpacking golang-github-mitchellh-reflectwalk-dev (0.0~git20150527.0.eecf4c7-1) ...
Selecting previously unselected package golang-github-mitchellh-copystructure-dev.
Preparing to unpack .../golang-github-mitchellh-copystructure-dev_0.0~git20160128.0.80adcec-1_all.deb ...
Unpacking golang-github-mitchellh-copystructure-dev (0.0~git20160128.0.80adcec-1) ...
Selecting previously unselected package golang-github-mitchellh-hashstructure-dev.
Preparing to unpack .../golang-github-mitchellh-hashstructure-dev_0.0~git20160209.0.6b17d66-1_all.deb ...
Unpacking golang-github-mitchellh-hashstructure-dev (0.0~git20160209.0.6b17d66-1) ...
Selecting previously unselected package golang-github-prometheus-client-model-dev.
Preparing to unpack .../golang-github-prometheus-client-model-dev_0.0.2+git20150212.12.fa8ad6f-1_all.deb ...
Unpacking golang-github-prometheus-client-model-dev (0.0.2+git20150212.12.fa8ad6f-1) ...
Selecting previously unselected package golang-github-shirou-gopsutil-dev.
Preparing to unpack .../golang-github-shirou-gopsutil-dev_1.0.0+git20160112-1_all.deb ...
Unpacking golang-github-shirou-gopsutil-dev (1.0.0+git20160112-1) ...
Selecting previously unselected package sbuild-build-depends-nomad-dummy.
Preparing to unpack .../sbuild-build-depends-nomad-dummy.deb ...
Unpacking sbuild-build-depends-nomad-dummy (0.invalid.0) ...
Processing triggers for libc-bin (2.22-3) ...
Setting up groff-base (1.22.3-7) ...
Setting up libbsd0:armhf (0.8.2-1) ...
Setting up bsdmainutils (9.0.10) ...
update-alternatives: using /usr/bin/bsd-write to provide /usr/bin/write (write) in auto mode
update-alternatives: using /usr/bin/bsd-from to provide /usr/bin/from (from) in auto mode
Setting up libpipeline1:armhf (1.4.1-2) ...
Setting up man-db (2.7.5-1) ...
Not building database; man-db/auto-update is not 'true'.
Setting up liblmdb0:armhf (0.9.18-1) ...
Setting up liblmdb-dev:armhf (0.9.18-1) ...
Setting up libunistring0:armhf (0.9.3-5.2) ...
Setting up libssl1.0.2:armhf (1.0.2g-1) ...
Setting up libmagic1:armhf (1:5.25-2) ...
Setting up file (1:5.25-2) ...
Setting up gettext-base (0.19.7-2) ...
Setting up libsasl2-modules-db:armhf (2.1.26.dfsg1-15) ...
Setting up libsasl2-2:armhf (2.1.26.dfsg1-15) ...
Setting up libicu55:armhf (55.1-7) ...
Setting up libxml2:armhf (2.9.3+dfsg1-1) ...
Setting up libsigsegv2:armhf (2.10-5) ...
Setting up m4 (1.4.17-5) ...
Setting up autoconf (2.69-10) ...
Setting up autotools-dev (20150820.1) ...
Setting up automake (1:1.15-4) ...
update-alternatives: using /usr/bin/automake-1.15 to provide /usr/bin/automake (automake) in auto mode
Setting up autopoint (0.19.7-2) ...
Setting up openssl (1.0.2g-1) ...
Setting up ca-certificates (20160104) ...
Setting up libffi6:armhf (3.2.1-4) ...
Setting up libglib2.0-0:armhf (2.48.0-1) ...
No schema files found: doing nothing.
Setting up libcroco3:armhf (0.6.11-1) ...
Setting up gettext (0.19.7-2) ...
Setting up intltool-debian (0.35.0+20060710.4) ...
Setting up po-debconf (1.0.19) ...
Setting up libarchive-zip-perl (1.57-1) ...
Setting up libfile-stripnondeterminism-perl (0.016-1) ...
Setting up libtimedate-perl (2.3000-2) ...
Setting up libtool (2.4.6-0.1) ...
Setting up golang-src (2:1.6-1+rpi1) ...
Setting up golang-go (2:1.6-1+rpi1) ...
update-alternatives: using /usr/lib/go/bin/go to provide /usr/bin/go (go) in auto mode
Setting up golang-text-dev (0.0~git20130502-1) ...
Setting up golang-pretty-dev (0.0~git20130613-1) ...
Setting up golang-github-bmizerany-assert-dev (0.0~git20120716-1) ...
Setting up golang-github-bitly-go-simplejson-dev (0.5.0-1) ...
Setting up golang-github-docker-docker-dev (1.8.3~ds1-2) ...
Setting up golang-github-mattn-go-isatty-dev (0.0.1-1) ...
Setting up libjs-jquery (1.11.3+dfsg-4) ...
Setting up libjs-jquery-ui (1.10.1+dfsg-1) ...
Setting up libprotobuf9v5:armhf (2.6.1-1.3) ...
Setting up libprotoc9v5:armhf (2.6.1-1.3) ...
Setting up libsasl2-dev (2.1.26.dfsg1-15) ...
Setting up libseccomp-dev:armhf (2.3.0-1) ...
Setting up libsystemd-dev:armhf (229-4) ...
Setting up pkg-config (0.29-3) ...
Setting up protobuf-compiler (2.6.1-1.3) ...
Setting up golang-check.v1-dev (0.0+git20150729.11d3bc7-3) ...
Setting up golang-github-codegangsta-cli-dev (0.0~git20151221-1) ...
Setting up golang-codegangsta-cli-dev (0.0~git20151221-1) ...
Setting up golang-context-dev (0.0~git20140604.1.14f550f-1) ...
Setting up golang-dbus-dev (3-1) ...
Setting up golang-dns-dev (0.0~git20151030.0.6a15566-1) ...
Setting up golang-github-agtorre-gocolorize-dev (1.0.0-1) ...
Setting up golang-github-armon-circbuf-dev (0.0~git20150827.0.bbbad09-1) ...
Setting up golang-github-julienschmidt-httprouter-dev (1.1-1) ...
Setting up golang-x-text-dev (0+git20151217.cf49866-1) ...
Setting up golang-golang-x-crypto-dev (1:0.0~git20151201.0.7b85b09-2) ...
Setting up golang-golang-x-net-dev (1:0.0+git20160110.4fd4a9f-1) ...
Setting up golang-goprotobuf-dev (0.0~git20150526-2) ...
Setting up golang-github-kardianos-osext-dev (0.0~git20151124.0.10da294-2) ...
Setting up golang-github-bugsnag-panicwrap-dev (1.1.0-1) ...
Setting up golang-github-juju-loggo-dev (0.0~git20150527.0.8477fc9-1) ...
Setting up golang-github-bradfitz-gomemcache-dev (0.0~git20141109-1) ...
Setting up golang-github-garyburd-redigo-dev (0.0~git20150901.0.d8dbe4d-1) ...
Setting up golang-github-go-fsnotify-fsnotify-dev (1.2.9-1) ...
Setting up golang-github-robfig-config-dev (0.0~git20141208-1) ...
Setting up golang-github-robfig-pathtree-dev (0.0~git20140121-1) ...
Setting up golang-github-revel-revel-dev (0.12.0+dfsg-1) ...
Setting up golang-github-bugsnag-bugsnag-go-dev (1.0.5+dfsg-1) ...
Setting up golang-github-getsentry-raven-go-dev (0.0~git20150721.0.74c334d-1) ...
Setting up golang-objx-dev (0.0~git20140527-4) ...
Setting up golang-github-stretchr-testify-dev (1.0-2) ...
Setting up golang-github-stvp-go-udp-testing-dev (0.0~git20150316.0.abcd331-1) ...
Setting up golang-github-tobi-airbrake-go-dev (0.0~git20150109-1) ...
Setting up golang-github-sirupsen-logrus-dev (0.8.7-3) ...
Setting up golang-logrus-dev (0.8.7-3) ...
Setting up golang-protobuf-extensions-dev (0+git20150513.fc2b8d3-4) ...
Setting up golang-yaml.v2-dev (0.0+git20160301.0.a83829b-1) ...
Setting up golang-github-prometheus-common-dev (0+git20160321.4045694-1) ...
Setting up golang-procfs-dev (0+git20150616.c91d8ee-1) ...
Setting up golang-prometheus-client-dev (0.7.0+ds-3) ...
Setting up golang-github-datadog-datadog-go-dev (0.0~git20150930.0.b050cd8-1) ...
Setting up golang-github-armon-go-metrics-dev (0.0~git20151207.0.06b6099-1) ...
Setting up golang-github-armon-go-radix-dev (0.0~git20150602.0.fbd82e8-1) ...
Setting up golang-github-armon-gomdb-dev (0.0~git20150106.0.151f2e0-1) ...
Setting up golang-github-go-ini-ini-dev (1.8.6-2) ...
Setting up golang-github-jmespath-go-jmespath-dev (0.2.2-1) ...
Setting up golang-github-shiena-ansicolor-dev (0.0~git20151119.0.a422bbe-1) ...
Setting up golang-github-lsegal-gucumber-dev (0.0~git20160110.0.44a4d7e-1) ...
Setting up golang-github-jacobsa-oglematchers-dev (0.0~git20150320-1) ...
Setting up golang-github-jacobsa-oglemock-dev (0.0~git20150428-2) ...
Setting up golang-go.net-dev (1:0.0+git20160110.4fd4a9f-1) ...
Setting up golang-github-jacobsa-reqtrace-dev (0.0~git20150505-2) ...
Setting up golang-github-jacobsa-ogletest-dev (0.0~git20150610-4) ...
Setting up golang-github-smartystreets-goconvey-dev (1.5.0-1) ...
Setting up golang-github-vaughan0-go-ini-dev (0.0~git20130923.0.a98ad7e-1) ...
Setting up golang-golang-x-tools-dev (1:0.0~git20160315.0.f42ec61-2) ...
Setting up golang-github-aws-aws-sdk-go-dev (1.1.14+dfsg-1) ...
Setting up golang-github-bgentry-speakeasy-dev (0.0~git20150902.0.36e9cfd-1) ...
Setting up golang-github-boltdb-bolt-dev (1.2.0-1) ...
Setting up golang-github-coreos-go-systemd-dev (5-1) ...
Setting up golang-github-docker-go-units-dev (0.3.0-1) ...
Setting up golang-github-dustin-go-humanize-dev (0.0~git20151125.0.8929fe9-1) ...
Setting up golang-github-elazarl-go-bindata-assetfs-dev (0.0~git20151224.0.57eb5e1-1) ...
Setting up golang-github-xeipuuv-gojsonpointer-dev (0.0~git20151027.0.e0fe6f6-1) ...
Setting up golang-github-xeipuuv-gojsonreference-dev (0.0~git20150808.0.e02fc20-1) ...
Setting up golang-github-xeipuuv-gojsonschema-dev (0.0~git20160323.0.93e72a7-1) ...
Setting up golang-github-opencontainers-specs-dev (0.4.0-1) ...
Setting up golang-github-seccomp-libseccomp-golang-dev (0.0~git20150813.0.1b506fc-1) ...
Setting up golang-github-vishvananda-netns-dev (0.0~git20150710.0.604eaf1-1) ...
Setting up golang-github-vishvananda-netlink-dev (0.0~git20160306.0.4fdf23c-1) ...
Setting up golang-gocapability-dev (0.0~git20150506.1.66ef2aa-1) ...
Setting up golang-github-opencontainers-runc-dev (0.0.8+dfsg-2) ...
Setting up golang-github-gorilla-mux-dev (0.0~git20150814.0.f7b6aaa-1) ...
Setting up golang-golang-x-sys-dev (0.0~git20150612-1) ...
Setting up golang-github-fsouza-go-dockerclient-dev (0.0+git20160316-1) ...
Setting up golang-github-gorhill-cronexpr-dev (1.0.0-1) ...
Setting up golang-github-hashicorp-go-cleanhttp-dev (0.0~git20160217.0.875fb67-1) ...
Setting up golang-github-hashicorp-go-checkpoint-dev (0.0~git20151022.0.e4b2dc3-1) ...
Setting up golang-github-hashicorp-golang-lru-dev (0.0~git20160207.0.a0d98a5-1) ...
Setting up golang-github-hashicorp-uuid-dev (0.0~git20160218.0.6994546-1) ...
Setting up golang-github-hashicorp-go-immutable-radix-dev (0.0~git20160222.0.8e8ed81-1) ...
Setting up golang-github-hashicorp-go-memdb-dev (0.0~git20160301.0.98f52f5-1) ...
Setting up golang-github-ugorji-go-msgpack-dev (0.0~git20130605.792643-1) ...
Setting up golang-github-ugorji-go-codec-dev (0.0~git20151130.0.357a44b-1) ...
Setting up golang-gopkg-vmihailenco-msgpack.v2-dev (2.4.11-1) ...
Setting up golang-gopkg-tomb.v2-dev (0.0~git20140626.14b3d72-1) ...
Setting up golang-gopkg-mgo.v2-dev (2015.12.06-1) ...
Setting up golang-github-hashicorp-go-msgpack-dev (0.0~git20150518-1) ...
Setting up golang-github-hashicorp-go-reap-dev (0.0~git20160113.0.2d85522-1) ...
Setting up golang-github-hashicorp-go-syslog-dev (0.0~git20150218.0.42a2b57-1) ...
Setting up golang-github-hashicorp-hcl-dev (0.0~git20151110.0.fa160f1-1) ...
Setting up golang-github-hashicorp-logutils-dev (0.0~git20150609.0.0dc08b1-1) ...
Setting up golang-github-hashicorp-memberlist-dev (0.0~git20160225.0.ae9a8d9-1) ...
Setting up golang-github-hashicorp-raft-dev (0.0~git20160317.0.3359516-1) ...
Setting up golang-github-hashicorp-raft-boltdb-dev (0.0~git20150201.d1e82c1-1) ...
Setting up golang-github-hashicorp-errwrap-dev (0.0~git20141028.0.7554cd9-1) ...
Setting up golang-github-hashicorp-go-multierror-dev (0.0~git20150916.0.d30f099-1) ...
Setting up golang-github-hashicorp-net-rpc-msgpackrpc-dev (0.0~git20151116.0.a14192a-1) ...
Setting up golang-github-hashicorp-yamux-dev (0.0~git20151129.0.df94978-1) ...
Setting up golang-github-hashicorp-scada-client-dev (0.0~git20150828.0.84989fd-1) ...
Setting up golang-github-hashicorp-mdns-dev (0.0~git20150317.0.2b439d3-1) ...
Setting up golang-github-mitchellh-cli-dev (0.0~git20160203.0.5c87c51-1) ...
Setting up golang-github-mitchellh-mapstructure-dev (0.0~git20150717.0.281073e-2) ...
Setting up golang-github-ryanuber-columnize-dev (2.1.0-1) ...
Setting up golang-github-hashicorp-serf-dev (0.7.0~ds1-1) ...
Setting up golang-github-inconshreveable-muxado-dev (0.0~git20140312.0.f693c7e-1) ...
Setting up golang-github-hashicorp-consul-dev (0.6.3~dfsg-2) ...
Setting up golang-github-hashicorp-go-getter-dev (0.0~git20160316.0.575ec4e-1) ...
Setting up golang-github-hashicorp-go-plugin-dev (0.0~git20160212.0.cccb4a1-1) ...
Setting up golang-github-hashicorp-go-version-dev (0.0~git20150915.0.2b9865f-1) ...
Setting up golang-golang-x-tools (1:0.0~git20160315.0.f42ec61-2) ...
Setting up golang-github-mitchellh-reflectwalk-dev (0.0~git20150527.0.eecf4c7-1) ...
Setting up golang-github-mitchellh-copystructure-dev (0.0~git20160128.0.80adcec-1) ...
Setting up golang-github-mitchellh-hashstructure-dev (0.0~git20160209.0.6b17d66-1) ...
Setting up golang-github-prometheus-client-model-dev (0.0.2+git20150212.12.fa8ad6f-1) ...
Setting up golang-github-shirou-gopsutil-dev (1.0.0+git20160112-1) ...
Setting up dh-autoreconf (12) ...
Setting up debhelper (9.20160403) ...
Setting up dh-golang (1.12) ...
Setting up dh-systemd (1.29) ...
Setting up sbuild-build-depends-nomad-dummy (0.invalid.0) ...
Setting up dh-strip-nondeterminism (0.016-1) ...
Processing triggers for libc-bin (2.22-3) ...
Processing triggers for ca-certificates (20160104) ...
Updating certificates in /etc/ssl/certs...
173 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d...
done.
W: No sandbox user '_apt' on the system, can not drop privileges

+------------------------------------------------------------------------------+
| Build environment                                                            |
+------------------------------------------------------------------------------+

Kernel: Linux 3.19.0-trunk-armmp armhf (armv7l)
Toolchain package versions: binutils_2.26-5 dpkg-dev_1.18.4 g++-5_5.3.1-11 gcc-5_5.3.1-11 libc6-dev_2.22-3 libstdc++-5-dev_5.3.1-11 libstdc++6_5.3.1-11 linux-libc-dev_3.18.5-1~exp1+rpi19+stretch
Package versions: adduser_3.114 apt_1.2.6 autoconf_2.69-10 automake_1:1.15-4 autopoint_0.19.7-2 autotools-dev_20150820.1 base-files_9.5+rpi1 base-passwd_3.5.39 bash_4.3-14 binutils_2.26-5 bsdmainutils_9.0.10 bsdutils_1:2.27.1-6 build-essential_11.7 bzip2_1.0.6-8 ca-certificates_20160104 console-setup_1.139 console-setup-linux_1.139 coreutils_8.25-2 cpio_2.11+dfsg-5 cpp_4:5.3.1-1+rpi1 cpp-5_5.3.1-11 dash_0.5.8-2.1 debconf_1.5.59 debfoster_2.7-2 debhelper_9.20160403 debianutils_4.7 dh-autoreconf_12 dh-golang_1.12 dh-strip-nondeterminism_0.016-1 dh-systemd_1.29 diffutils_1:3.3-3 dmsetup_2:1.02.116-1 dpkg_1.18.4 dpkg-dev_1.18.4 e2fslibs_1.42.13-1 e2fsprogs_1.42.13-1 fakeroot_1.20.2-1 file_1:5.25-2 findutils_4.6.0+git+20160126-2 g++_4:5.3.1-1+rpi1 g++-5_5.3.1-11 gcc_4:5.3.1-1+rpi1 gcc-4.6-base_4.6.4-5+rpi1 gcc-4.7-base_4.7.3-11+rpi1 gcc-4.8-base_4.8.5-4 gcc-4.9-base_4.9.3-12 gcc-5_5.3.1-11 gcc-5-base_5.3.1-11 gettext_0.19.7-2 gettext-base_0.19.7-2 gnupg_1.4.20-4 golang-check.v1-dev_0.0+git20150729.11d3bc7-3 golang-codegangsta-cli-dev_0.0~git20151221-1 golang-context-dev_0.0~git20140604.1.14f550f-1 golang-dbus-dev_3-1 golang-dns-dev_0.0~git20151030.0.6a15566-1 golang-github-agtorre-gocolorize-dev_1.0.0-1 golang-github-armon-circbuf-dev_0.0~git20150827.0.bbbad09-1 golang-github-armon-go-metrics-dev_0.0~git20151207.0.06b6099-1 golang-github-armon-go-radix-dev_0.0~git20150602.0.fbd82e8-1 golang-github-armon-gomdb-dev_0.0~git20150106.0.151f2e0-1 golang-github-aws-aws-sdk-go-dev_1.1.14+dfsg-1 golang-github-bgentry-speakeasy-dev_0.0~git20150902.0.36e9cfd-1 golang-github-bitly-go-simplejson-dev_0.5.0-1 golang-github-bmizerany-assert-dev_0.0~git20120716-1 golang-github-boltdb-bolt-dev_1.2.0-1 golang-github-bradfitz-gomemcache-dev_0.0~git20141109-1 golang-github-bugsnag-bugsnag-go-dev_1.0.5+dfsg-1 golang-github-bugsnag-panicwrap-dev_1.1.0-1 golang-github-codegangsta-cli-dev_0.0~git20151221-1 golang-github-coreos-go-systemd-dev_5-1 golang-github-datadog-datadog-go-dev_0.0~git20150930.0.b050cd8-1 golang-github-docker-docker-dev_1.8.3~ds1-2 golang-github-docker-go-units-dev_0.3.0-1 golang-github-dustin-go-humanize-dev_0.0~git20151125.0.8929fe9-1 golang-github-elazarl-go-bindata-assetfs-dev_0.0~git20151224.0.57eb5e1-1 golang-github-fsouza-go-dockerclient-dev_0.0+git20160316-1 golang-github-garyburd-redigo-dev_0.0~git20150901.0.d8dbe4d-1 golang-github-getsentry-raven-go-dev_0.0~git20150721.0.74c334d-1 golang-github-go-fsnotify-fsnotify-dev_1.2.9-1 golang-github-go-ini-ini-dev_1.8.6-2 golang-github-gorhill-cronexpr-dev_1.0.0-1 golang-github-gorilla-mux-dev_0.0~git20150814.0.f7b6aaa-1 golang-github-hashicorp-consul-dev_0.6.3~dfsg-2 golang-github-hashicorp-errwrap-dev_0.0~git20141028.0.7554cd9-1 golang-github-hashicorp-go-checkpoint-dev_0.0~git20151022.0.e4b2dc3-1 golang-github-hashicorp-go-cleanhttp-dev_0.0~git20160217.0.875fb67-1 golang-github-hashicorp-go-getter-dev_0.0~git20160316.0.575ec4e-1 golang-github-hashicorp-go-immutable-radix-dev_0.0~git20160222.0.8e8ed81-1 golang-github-hashicorp-go-memdb-dev_0.0~git20160301.0.98f52f5-1 golang-github-hashicorp-go-msgpack-dev_0.0~git20150518-1 golang-github-hashicorp-go-multierror-dev_0.0~git20150916.0.d30f099-1 golang-github-hashicorp-go-plugin-dev_0.0~git20160212.0.cccb4a1-1 golang-github-hashicorp-go-reap-dev_0.0~git20160113.0.2d85522-1 golang-github-hashicorp-go-syslog-dev_0.0~git20150218.0.42a2b57-1 golang-github-hashicorp-go-version-dev_0.0~git20150915.0.2b9865f-1 golang-github-hashicorp-golang-lru-dev_0.0~git20160207.0.a0d98a5-1 
golang-github-hashicorp-hcl-dev_0.0~git20151110.0.fa160f1-1 golang-github-hashicorp-logutils-dev_0.0~git20150609.0.0dc08b1-1 golang-github-hashicorp-mdns-dev_0.0~git20150317.0.2b439d3-1 golang-github-hashicorp-memberlist-dev_0.0~git20160225.0.ae9a8d9-1 golang-github-hashicorp-net-rpc-msgpackrpc-dev_0.0~git20151116.0.a14192a-1 golang-github-hashicorp-raft-boltdb-dev_0.0~git20150201.d1e82c1-1 golang-github-hashicorp-raft-dev_0.0~git20160317.0.3359516-1 golang-github-hashicorp-scada-client-dev_0.0~git20150828.0.84989fd-1 golang-github-hashicorp-serf-dev_0.7.0~ds1-1 golang-github-hashicorp-uuid-dev_0.0~git20160218.0.6994546-1 golang-github-hashicorp-yamux-dev_0.0~git20151129.0.df94978-1 golang-github-inconshreveable-muxado-dev_0.0~git20140312.0.f693c7e-1 golang-github-jacobsa-oglematchers-dev_0.0~git20150320-1 golang-github-jacobsa-oglemock-dev_0.0~git20150428-2 golang-github-jacobsa-ogletest-dev_0.0~git20150610-4 golang-github-jacobsa-reqtrace-dev_0.0~git20150505-2 golang-github-jmespath-go-jmespath-dev_0.2.2-1 golang-github-juju-loggo-dev_0.0~git20150527.0.8477fc9-1 golang-github-julienschmidt-httprouter-dev_1.1-1 golang-github-kardianos-osext-dev_0.0~git20151124.0.10da294-2 golang-github-lsegal-gucumber-dev_0.0~git20160110.0.44a4d7e-1 golang-github-mattn-go-isatty-dev_0.0.1-1 golang-github-mitchellh-cli-dev_0.0~git20160203.0.5c87c51-1 golang-github-mitchellh-copystructure-dev_0.0~git20160128.0.80adcec-1 golang-github-mitchellh-hashstructure-dev_0.0~git20160209.0.6b17d66-1 golang-github-mitchellh-mapstructure-dev_0.0~git20150717.0.281073e-2 golang-github-mitchellh-reflectwalk-dev_0.0~git20150527.0.eecf4c7-1 golang-github-opencontainers-runc-dev_0.0.8+dfsg-2 golang-github-opencontainers-specs-dev_0.4.0-1 golang-github-prometheus-client-model-dev_0.0.2+git20150212.12.fa8ad6f-1 golang-github-prometheus-common-dev_0+git20160321.4045694-1 golang-github-revel-revel-dev_0.12.0+dfsg-1 golang-github-robfig-config-dev_0.0~git20141208-1 golang-github-robfig-pathtree-dev_0.0~git20140121-1 golang-github-ryanuber-columnize-dev_2.1.0-1 golang-github-seccomp-libseccomp-golang-dev_0.0~git20150813.0.1b506fc-1 golang-github-shiena-ansicolor-dev_0.0~git20151119.0.a422bbe-1 golang-github-shirou-gopsutil-dev_1.0.0+git20160112-1 golang-github-sirupsen-logrus-dev_0.8.7-3 golang-github-smartystreets-goconvey-dev_1.5.0-1 golang-github-stretchr-testify-dev_1.0-2 golang-github-stvp-go-udp-testing-dev_0.0~git20150316.0.abcd331-1 golang-github-tobi-airbrake-go-dev_0.0~git20150109-1 golang-github-ugorji-go-codec-dev_0.0~git20151130.0.357a44b-1 golang-github-ugorji-go-msgpack-dev_0.0~git20130605.792643-1 golang-github-vaughan0-go-ini-dev_0.0~git20130923.0.a98ad7e-1 golang-github-vishvananda-netlink-dev_0.0~git20160306.0.4fdf23c-1 golang-github-vishvananda-netns-dev_0.0~git20150710.0.604eaf1-1 golang-github-xeipuuv-gojsonpointer-dev_0.0~git20151027.0.e0fe6f6-1 golang-github-xeipuuv-gojsonreference-dev_0.0~git20150808.0.e02fc20-1 golang-github-xeipuuv-gojsonschema-dev_0.0~git20160323.0.93e72a7-1 golang-go_2:1.6-1+rpi1 golang-go.net-dev_1:0.0+git20160110.4fd4a9f-1 golang-gocapability-dev_0.0~git20150506.1.66ef2aa-1 golang-golang-x-crypto-dev_1:0.0~git20151201.0.7b85b09-2 golang-golang-x-net-dev_1:0.0+git20160110.4fd4a9f-1 golang-golang-x-sys-dev_0.0~git20150612-1 golang-golang-x-tools_1:0.0~git20160315.0.f42ec61-2 golang-golang-x-tools-dev_1:0.0~git20160315.0.f42ec61-2 golang-gopkg-mgo.v2-dev_2015.12.06-1 golang-gopkg-tomb.v2-dev_0.0~git20140626.14b3d72-1 golang-gopkg-vmihailenco-msgpack.v2-dev_2.4.11-1 
golang-goprotobuf-dev_0.0~git20150526-2 golang-logrus-dev_0.8.7-3 golang-objx-dev_0.0~git20140527-4 golang-pretty-dev_0.0~git20130613-1 golang-procfs-dev_0+git20150616.c91d8ee-1 golang-prometheus-client-dev_0.7.0+ds-3 golang-protobuf-extensions-dev_0+git20150513.fc2b8d3-4 golang-src_2:1.6-1+rpi1 golang-text-dev_0.0~git20130502-1 golang-x-text-dev_0+git20151217.cf49866-1 golang-yaml.v2-dev_0.0+git20160301.0.a83829b-1 gpgv_1.4.20-4 grep_2.22-1 groff-base_1.22.3-7 gzip_1.6-4 hostname_3.17 ifupdown_0.8.10 init_1.29 init-system-helpers_1.29 initscripts_2.88dsf-59.3 insserv_1.14.0-5.3 intltool-debian_0.35.0+20060710.4 iproute2_4.3.0-1 isc-dhcp-client_4.3.3-8 isc-dhcp-common_4.3.3-8 kbd_2.0.3-2 keyboard-configuration_1.139 klibc-utils_2.0.4-8+rpi1 kmod_22-1 libacl1_2.2.52-3 libapparmor1_2.10-3 libapt-pkg5.0_1.2.6 libarchive-zip-perl_1.57-1 libasan2_5.3.1-11 libatm1_1:2.5.1-1.5 libatomic1_5.3.1-11 libattr1_1:2.4.47-2 libaudit-common_1:2.4.5-1 libaudit1_1:2.4.5-1 libblkid1_2.27.1-6 libbsd0_0.8.2-1 libbz2-1.0_1.0.6-8 libc-bin_2.22-3 libc-dev-bin_2.22-3 libc6_2.22-3 libc6-dev_2.22-3 libcap2_1:2.24-12 libcap2-bin_1:2.24-12 libcc1-0_5.3.1-11 libcomerr2_1.42.13-1 libcroco3_0.6.11-1 libcryptsetup4_2:1.7.0-2 libdb5.3_5.3.28-11 libdbus-1-3_1.10.8-1 libdebconfclient0_0.207 libdevmapper1.02.1_2:1.02.116-1 libdns-export100_1:9.9.5.dfsg-12.1 libdpkg-perl_1.18.4 libdrm2_2.4.67-1 libfakeroot_1.20.2-1 libfdisk1_2.27.1-6 libffi6_3.2.1-4 libfile-stripnondeterminism-perl_0.016-1 libgc1c2_1:7.4.2-7.3 libgcc-5-dev_5.3.1-11 libgcc1_1:5.3.1-11 libgcrypt20_1.6.5-2 libgdbm3_1.8.3-13.1 libglib2.0-0_2.48.0-1 libgmp10_2:6.1.0+dfsg-2 libgomp1_5.3.1-11 libgpg-error0_1.21-2 libicu55_55.1-7 libisc-export95_1:9.9.5.dfsg-12.1 libisl15_0.16.1-1 libjs-jquery_1.11.3+dfsg-4 libjs-jquery-ui_1.10.1+dfsg-1 libklibc_2.0.4-8+rpi1 libkmod2_22-1 liblmdb-dev_0.9.18-1 liblmdb0_0.9.18-1 liblocale-gettext-perl_1.07-1+b1 liblz4-1_0.0~r131-2 liblzma5_5.1.1alpha+20120614-2.1 libmagic1_1:5.25-2 libmount1_2.27.1-6 libmpc3_1.0.3-1 libmpfr4_3.1.4-1 libncurses5_6.0+20160213-1 libncursesw5_6.0+20160213-1 libpam-modules_1.1.8-3.2 libpam-modules-bin_1.1.8-3.2 libpam-runtime_1.1.8-3.2 libpam0g_1.1.8-3.2 libpcre3_2:8.38-3 libperl5.22_5.22.1-9 libpipeline1_1.4.1-2 libplymouth4_0.9.2-3 libpng12-0_1.2.54-4 libprocps5_2:3.3.11-3 libprotobuf9v5_2.6.1-1.3 libprotoc9v5_2.6.1-1.3 libreadline6_6.3-8+b3 libsasl2-2_2.1.26.dfsg1-15 libsasl2-dev_2.1.26.dfsg1-15 libsasl2-modules-db_2.1.26.dfsg1-15 libseccomp-dev_2.3.0-1 libseccomp2_2.3.0-1 libselinux1_2.4-3 libsemanage-common_2.4-3 libsemanage1_2.4-3 libsepol1_2.4-2 libsigsegv2_2.10-5 libsmartcols1_2.27.1-6 libss2_1.42.13-1 libssl1.0.2_1.0.2g-1 libstdc++-5-dev_5.3.1-11 libstdc++6_5.3.1-11 libsystemd-dev_229-4 libsystemd0_229-4 libtimedate-perl_2.3000-2 libtinfo5_6.0+20160213-1 libtool_2.4.6-0.1 libubsan0_5.3.1-11 libudev1_229-2 libunistring0_0.9.3-5.2 libusb-0.1-4_2:0.1.12-28 libustr-1.0-1_1.0.4-5 libuuid1_2.27.1-6 libxml2_2.9.3+dfsg1-1 linux-libc-dev_3.18.5-1~exp1+rpi19+stretch login_1:4.2-3.1 lsb-base_9.20160110+rpi1 m4_1.4.17-5 make_4.1-6 makedev_2.3.1-93 man-db_2.7.5-1 manpages_4.04-2 mawk_1.3.3-17 mount_2.27.1-6 multiarch-support_2.22-3 ncurses-base_6.0+20160213-1 ncurses-bin_6.0+20160213-1 netbase_5.3 openssl_1.0.2g-1 passwd_1:4.2-3.1 patch_2.7.5-1 perl_5.22.1-9 perl-base_5.22.1-9 perl-modules-5.22_5.22.1-9 pkg-config_0.29-3 po-debconf_1.0.19 procps_2:3.3.11-3 protobuf-compiler_2.6.1-1.3 psmisc_22.21-2.1 raspbian-archive-keyring_20120528.2 readline-common_6.3-8 sbuild-build-depends-core-dummy_0.invalid.0 
sbuild-build-depends-nomad-dummy_0.invalid.0 sed_4.2.2-7.1 sensible-utils_0.0.9 startpar_0.59-3 systemd_229-4 systemd-sysv_229-2 sysv-rc_2.88dsf-59.3 sysvinit-utils_2.88dsf-59.3 tar_1.28-2.1 tzdata_2016a-1 udev_229-2 util-linux_2.27.1-6 xkb-data_2.17-1 xz-utils_5.1.1alpha+20120614-2.1 zlib1g_1:1.2.8.dfsg-2+b1

+------------------------------------------------------------------------------+
| Build                                                                        |
+------------------------------------------------------------------------------+


Unpack source
-------------

gpgv: keyblock resource `/sbuild-nonexistent/.gnupg/trustedkeys.gpg': file open error
gpgv: Signature made Mon Mar 21 17:19:16 2016 UTC using RSA key ID 53968D1B
gpgv: Can't check signature: public key not found
dpkg-source: warning: failed to verify signature on ./nomad_0.3.1+dfsg-1.dsc
dpkg-source: info: extracting nomad in nomad-0.3.1+dfsg
dpkg-source: info: unpacking nomad_0.3.1+dfsg.orig.tar.xz
dpkg-source: info: unpacking nomad_0.3.1+dfsg-1.debian.tar.xz

Check disc space
----------------

Sufficient free space for build

User Environment
----------------

DEB_BUILD_OPTIONS=parallel=4
HOME=/sbuild-nonexistent
LOGNAME=root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
SCHROOT_ALIAS_NAME=stretch-staging-armhf-sbuild
SCHROOT_CHROOT_NAME=stretch-staging-armhf-sbuild
SCHROOT_COMMAND=env
SCHROOT_GID=109
SCHROOT_GROUP=buildd
SCHROOT_SESSION_ID=stretch-staging-armhf-sbuild-065db6da-3980-4d70-895c-32e139fef513
SCHROOT_UID=104
SCHROOT_USER=buildd
SHELL=/bin/sh
TERM=xterm
USER=buildd
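
DEB_BUILD_OPTIONS=parallel=4 above requests a four-way parallel build, and HOME=/sbuild-nonexistent keeps the build from reading any real user configuration; it is also why gpgv in the "Unpack source" step below reports a file open error for /sbuild-nonexistent/.gnupg/trustedkeys.gpg. Per Debian Policy the variable is a space-separated list of tokens such as nocheck or parallel=N; debhelper parses it in Perl, but a minimal illustrative sketch of the same parsing in Go (hypothetical helper name, not debhelper's own code) would be:

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // parseBuildOptions splits a DEB_BUILD_OPTIONS value such as
    // "parallel=4 nocheck" into a flag set and a parallelism level.
    // Illustrative only; it is not debhelper's actual parser.
    func parseBuildOptions(val string) (flags map[string]bool, parallel int) {
        flags = make(map[string]bool)
        parallel = 1 // default when no parallel=N token is present
        for _, tok := range strings.Fields(val) {
            if strings.HasPrefix(tok, "parallel=") {
                if p, err := strconv.Atoi(strings.TrimPrefix(tok, "parallel=")); err == nil && p > 0 {
                    parallel = p
                }
                continue
            }
            flags[tok] = true
        }
        return flags, parallel
    }

    func main() {
        flags, parallel := parseBuildOptions(os.Getenv("DEB_BUILD_OPTIONS"))
        fmt.Printf("parallel=%d nocheck=%v\n", parallel, flags["nocheck"])
    }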

dpkg-buildpackage
-----------------

dpkg-buildpackage: source package nomad
dpkg-buildpackage: source version 0.3.1+dfsg-1
dpkg-buildpackage: source distribution unstable
 dpkg-source --before-build nomad-0.3.1+dfsg
dpkg-buildpackage: host architecture armhf
 fakeroot debian/rules clean
dh clean --buildsystem=golang --with=golang,systemd
   dh_testdir -O--buildsystem=golang
   dh_auto_clean -O--buildsystem=golang
   debian/rules override_dh_clean
make[1]: Entering directory '/<<BUILDDIR>>/nomad-0.3.1+dfsg'
dh_clean
## Remove Files-Excluded (when built from checkout or non-DFSG tarball):
rm -f -rv `perl -0nE 'say $1 if m{^Files\-Excluded\:\s*(.*?)(?:\n\n|Files:|Comment:)}sm;' debian/copyright`
find vendor -type d -empty -delete -print
vendor/github.com/DataDog
vendor/github.com/StackExchange
vendor/github.com/armon
vendor/github.com/aws
vendor/github.com/bgentry
vendor/github.com/boltdb
vendor/github.com/coreos
vendor/github.com/docker
vendor/github.com/dustin
vendor/github.com/fsouza
vendor/github.com/go-ini
vendor/github.com/go-ole
vendor/github.com/godbus
vendor/github.com/golang
vendor/github.com/gorhill
vendor/github.com/hashicorp
vendor/github.com/jmespath
vendor/github.com/kardianos
vendor/github.com/mattn
vendor/github.com/matttproud
vendor/github.com/mitchellh
vendor/github.com/opencontainers
vendor/github.com/prometheus
vendor/github.com/ryanuber
vendor/github.com/shirou
vendor/github.com/ugorji
vendor/github.com
vendor/golang.org/x
vendor/golang.org
vendor
make[1]: Leaving directory '/<<BUILDDIR>>/nomad-0.3.1+dfsg'
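
The override_dh_clean target above does two things: the perl one-liner prints the Files-Excluded field of debian/copyright and rm -f -rv deletes those paths, then find prunes the vendor/ directories left empty, which is the list of vendored trees printed just before make leaves the directory. A rough Go equivalent of the same two steps, for illustration only (the package really does this with perl, rm and find as shown, and wildcard patterns in Files-Excluded are not expanded here):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "regexp"
        "strings"
    )

    // filesExcluded extracts the Files-Excluded field from debian/copyright,
    // mirroring the perl one-liner in debian/rules: capture everything after
    // "Files-Excluded:" up to a blank line or the next Files:/Comment: field.
    func filesExcluded(copyright []byte) []string {
        re := regexp.MustCompile(`(?sm)^Files-Excluded:\s*(.*?)(?:\n\n|Files:|Comment:)`)
        m := re.FindSubmatch(copyright)
        if m == nil {
            return nil
        }
        return strings.Fields(string(m[1]))
    }

    func main() {
        data, err := os.ReadFile("debian/copyright")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Step 1: remove every excluded path (cf. rm -f -rv `perl ...`).
        for _, p := range filesExcluded(data) {
            os.RemoveAll(p)
        }
        // Step 2: prune empty directories under vendor/, deepest first
        // (cf. find vendor -type d -empty -delete -print).
        var dirs []string
        filepath.Walk("vendor", func(path string, info os.FileInfo, err error) error {
            if err == nil && info.IsDir() {
                dirs = append(dirs, path)
            }
            return nil
        })
        for i := len(dirs) - 1; i >= 0; i-- {
            if entries, err := os.ReadDir(dirs[i]); err == nil && len(entries) == 0 {
                os.Remove(dirs[i]) // only succeeds once the directory is empty
            }
        }
    }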
 debian/rules build-arch
dh build-arch --buildsystem=golang --with=golang,systemd
   dh_testdir -a -O--buildsystem=golang
   dh_update_autotools_config -a -O--buildsystem=golang
   dh_auto_configure -a -O--buildsystem=golang
   dh_auto_build -a -O--buildsystem=golang
	go install -v github.com/hashicorp/nomad github.com/hashicorp/nomad/api github.com/hashicorp/nomad/client github.com/hashicorp/nomad/client/allocdir github.com/hashicorp/nomad/client/config github.com/hashicorp/nomad/client/driver github.com/hashicorp/nomad/client/driver/env github.com/hashicorp/nomad/client/driver/executor github.com/hashicorp/nomad/client/driver/logging github.com/hashicorp/nomad/client/driver/structs github.com/hashicorp/nomad/client/fingerprint github.com/hashicorp/nomad/client/getter github.com/hashicorp/nomad/client/testutil github.com/hashicorp/nomad/command github.com/hashicorp/nomad/command/agent github.com/hashicorp/nomad/helper/args github.com/hashicorp/nomad/helper/discover github.com/hashicorp/nomad/helper/flag-slice github.com/hashicorp/nomad/helper/gated-writer github.com/hashicorp/nomad/helper/testtask github.com/hashicorp/nomad/jobspec github.com/hashicorp/nomad/nomad github.com/hashicorp/nomad/nomad/mock github.com/hashicorp/nomad/nomad/state github.com/hashicorp/nomad/nomad/structs github.com/hashicorp/nomad/nomad/watch github.com/hashicorp/nomad/scheduler github.com/hashicorp/nomad/testutil
github.com/dustin/go-humanize
github.com/hashicorp/yamux
github.com/hashicorp/go-cleanhttp
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/opts
github.com/hashicorp/nomad/api
github.com/Sirupsen/logrus
github.com/docker/go-units
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/system
github.com/hashicorp/go-plugin
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/fileutils
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/idtools
golang.org/x/net/context
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/promise
github.com/opencontainers/runc/libcontainer/user
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/ioutils
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/stdcopy
github.com/fsouza/go-dockerclient/external/github.com/hashicorp/go-cleanhttp
github.com/hashicorp/errwrap
github.com/hashicorp/go-multierror
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/homedir
github.com/hashicorp/go-version
github.com/gorhill/cronexpr
github.com/hashicorp/go-msgpack/codec
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/pools
github.com/fsouza/go-dockerclient/external/github.com/docker/docker/pkg/archive
github.com/hashicorp/nomad/helper/args
github.com/mitchellh/reflectwalk
github.com/mitchellh/copystructure
github.com/mitchellh/hashstructure
github.com/ugorji/go/codec
github.com/opencontainers/runc/libcontainer/configs
github.com/hashicorp/nomad/client/driver/structs
github.com/opencontainers/runc/libcontainer/cgroups
github.com/opencontainers/runc/libcontainer/system
github.com/fsouza/go-dockerclient
github.com/opencontainers/runc/libcontainer/utils
github.com/godbus/dbus
github.com/opencontainers/runc/libcontainer/cgroups/fs
github.com/coreos/go-systemd/util
github.com/hashicorp/serf/coordinate
github.com/hashicorp/consul/api
github.com/shirou/gopsutil/internal/common
github.com/shirou/gopsutil/cpu
github.com/coreos/go-systemd/dbus
github.com/shirou/gopsutil/host
github.com/shirou/gopsutil/mem
github.com/opencontainers/runc/libcontainer/cgroups/systemd
github.com/kardianos/osext
github.com/hashicorp/nomad/helper/discover
github.com/mitchellh/mapstructure
github.com/hashicorp/hcl/hcl/strconv
github.com/armon/go-radix
github.com/hashicorp/hcl/hcl/token
github.com/hashicorp/hcl/hcl/ast
github.com/hashicorp/hcl/hcl/scanner
github.com/hashicorp/hcl/json/token
github.com/bgentry/speakeasy
github.com/hashicorp/hcl/json/scanner
github.com/hashicorp/hcl/hcl/parser
github.com/hashicorp/hcl/json/parser
github.com/mattn/go-isatty
github.com/ryanuber/columnize
github.com/armon/go-metrics
github.com/mitchellh/cli
github.com/hashicorp/hcl
github.com/hashicorp/go-checkpoint
github.com/hashicorp/go-syslog
github.com/hashicorp/logutils
github.com/aws/aws-sdk-go/aws/awserr
github.com/go-ini/ini
github.com/aws/aws-sdk-go/aws/client/metadata
github.com/jmespath/go-jmespath
github.com/aws/aws-sdk-go/private/endpoints
github.com/hashicorp/go-getter/helper/url
github.com/hashicorp/consul/tlsutil
github.com/hashicorp/golang-lru/simplelru
github.com/hashicorp/go-immutable-radix
github.com/aws/aws-sdk-go/aws/credentials
github.com/hashicorp/go-memdb
github.com/aws/aws-sdk-go/aws
github.com/hashicorp/memberlist
github.com/aws/aws-sdk-go/aws/awsutil
github.com/hashicorp/net-rpc-msgpackrpc
github.com/aws/aws-sdk-go/aws/request
github.com/hashicorp/nomad/nomad/watch
github.com/hashicorp/raft
github.com/aws/aws-sdk-go/aws/client
github.com/aws/aws-sdk-go/aws/corehandlers
github.com/aws/aws-sdk-go/aws/ec2metadata
github.com/aws/aws-sdk-go/private/protocol
github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds
github.com/aws/aws-sdk-go/private/protocol/query/queryutil
github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil
github.com/aws/aws-sdk-go/aws/defaults
github.com/aws/aws-sdk-go/private/protocol/rest
github.com/aws/aws-sdk-go/private/protocol/query
github.com/aws/aws-sdk-go/aws/session
github.com/aws/aws-sdk-go/private/waiter
github.com/aws/aws-sdk-go/private/protocol/restxml
github.com/aws/aws-sdk-go/private/signer/v4
github.com/boltdb/bolt
github.com/hashicorp/serf/serf
github.com/aws/aws-sdk-go/service/s3
github.com/hashicorp/raft-boltdb
github.com/hashicorp/nomad/helper/flag-slice
github.com/hashicorp/nomad/helper/gated-writer
github.com/hashicorp/scada-client
github.com/hashicorp/nomad/client/testutil
github.com/hashicorp/go-getter
github.com/hashicorp/nomad/nomad/structs
github.com/hashicorp/nomad/client/config
github.com/hashicorp/nomad/client/allocdir
github.com/hashicorp/nomad/client/driver/env
github.com/hashicorp/nomad/jobspec
github.com/hashicorp/nomad/client/fingerprint
github.com/hashicorp/nomad/client/getter
github.com/hashicorp/nomad/nomad/state
github.com/hashicorp/nomad/client/driver/logging
github.com/hashicorp/nomad/scheduler
github.com/hashicorp/nomad/client/driver/executor
github.com/hashicorp/nomad/helper/testtask
github.com/hashicorp/nomad/nomad/mock
github.com/hashicorp/nomad/testutil
github.com/hashicorp/nomad/nomad
github.com/hashicorp/nomad/client/driver
github.com/hashicorp/nomad/command
github.com/hashicorp/nomad/client
github.com/hashicorp/nomad/command/agent
github.com/hashicorp/nomad
   debian/rules override_dh_auto_test
make[1]: Entering directory '/<<BUILDDIR>>/nomad-0.3.1+dfsg'
dh_auto_test
	go test -v github.com/hashicorp/nomad github.com/hashicorp/nomad/api github.com/hashicorp/nomad/client github.com/hashicorp/nomad/client/allocdir github.com/hashicorp/nomad/client/config github.com/hashicorp/nomad/client/driver github.com/hashicorp/nomad/client/driver/env github.com/hashicorp/nomad/client/driver/executor github.com/hashicorp/nomad/client/driver/logging github.com/hashicorp/nomad/client/driver/structs github.com/hashicorp/nomad/client/fingerprint github.com/hashicorp/nomad/client/getter github.com/hashicorp/nomad/client/testutil github.com/hashicorp/nomad/command github.com/hashicorp/nomad/command/agent github.com/hashicorp/nomad/helper/args github.com/hashicorp/nomad/helper/discover github.com/hashicorp/nomad/helper/flag-slice github.com/hashicorp/nomad/helper/gated-writer github.com/hashicorp/nomad/helper/testtask github.com/hashicorp/nomad/jobspec github.com/hashicorp/nomad/nomad github.com/hashicorp/nomad/nomad/mock github.com/hashicorp/nomad/nomad/state github.com/hashicorp/nomad/nomad/structs github.com/hashicorp/nomad/nomad/watch github.com/hashicorp/nomad/scheduler github.com/hashicorp/nomad/testutil
testing: warning: no tests to run
PASS
ok  	github.com/hashicorp/nomad	0.152s
=== RUN   TestAgent_Self
=== RUN   TestAgent_NodeName
=== RUN   TestAgent_Datacenter
=== RUN   TestAgent_Join
=== RUN   TestAgent_Members
=== RUN   TestAgent_ForceLeave
=== RUN   TestAgent_SetServers
=== RUN   TestAgents_Sort
--- PASS: TestAgents_Sort (0.00s)
=== RUN   TestAllocations_List
=== RUN   TestAllocations_PrefixList
=== RUN   TestAllocations_CreateIndexSort
--- PASS: TestAllocations_CreateIndexSort (0.00s)
=== RUN   TestRequestTime
=== RUN   TestDefaultConfig_env
=== RUN   TestSetQueryOptions
=== RUN   TestSetWriteOptions
=== RUN   TestRequestToHTTP
=== RUN   TestParseQueryMeta
=== RUN   TestParseWriteMeta
=== RUN   TestQueryString
=== RUN   TestCompose
--- PASS: TestCompose (0.00s)
=== RUN   TestCompose_Constraints
--- PASS: TestCompose_Constraints (0.00s)
=== RUN   TestEvaluations_List
=== RUN   TestEvaluations_PrefixList
=== RUN   TestEvaluations_Info
=== RUN   TestEvaluations_Allocations
=== RUN   TestEvaluations_Sort
--- PASS: TestEvaluations_Sort (0.00s)
=== RUN   TestJobs_Register
=== RUN   TestJobs_Info
=== RUN   TestJobs_PrefixList
=== RUN   TestJobs_List
=== RUN   TestJobs_Allocations
=== RUN   TestJobs_Evaluations
=== RUN   TestJobs_Deregister
=== RUN   TestJobs_ForceEvaluate
=== RUN   TestJobs_PeriodicForce
=== RUN   TestJobs_NewBatchJob
--- PASS: TestJobs_NewBatchJob (0.00s)
=== RUN   TestJobs_NewServiceJob
--- PASS: TestJobs_NewServiceJob (0.00s)
=== RUN   TestJobs_SetMeta
--- PASS: TestJobs_SetMeta (0.00s)
=== RUN   TestJobs_Constrain
--- PASS: TestJobs_Constrain (0.00s)
=== RUN   TestJobs_Sort
--- PASS: TestJobs_Sort (0.00s)
=== RUN   TestNodes_List
=== RUN   TestNodes_PrefixList
=== RUN   TestNodes_Info
=== RUN   TestNodes_ToggleDrain
=== RUN   TestNodes_Allocations
=== RUN   TestNodes_ForceEvaluate
=== RUN   TestNodes_Sort
--- PASS: TestNodes_Sort (0.00s)
=== RUN   TestRegionsList
=== RUN   TestStatus_Leader
=== RUN   TestSystem_GarbageCollect
=== RUN   TestTaskGroup_NewTaskGroup
--- PASS: TestTaskGroup_NewTaskGroup (0.00s)
=== RUN   TestTaskGroup_Constrain
--- PASS: TestTaskGroup_Constrain (0.00s)
=== RUN   TestTaskGroup_SetMeta
--- PASS: TestTaskGroup_SetMeta (0.00s)
=== RUN   TestTaskGroup_AddTask
--- PASS: TestTaskGroup_AddTask (0.00s)
=== RUN   TestTask_NewTask
--- PASS: TestTask_NewTask (0.00s)
=== RUN   TestTask_SetConfig
--- PASS: TestTask_SetConfig (0.00s)
=== RUN   TestTask_SetMeta
--- PASS: TestTask_SetMeta (0.00s)
=== RUN   TestTask_Require
--- PASS: TestTask_Require (0.00s)
=== RUN   TestTask_Constrain
--- PASS: TestTask_Constrain (0.00s)
--- SKIP: TestAgent_Self (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestAgent_NodeName (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestAgent_Datacenter (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestAgent_Join (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestAgent_Members (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestAgent_ForceLeave (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestAgent_SetServers (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestAllocations_List (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestAllocations_PrefixList (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- PASS: TestRequestTime (0.33s)
--- PASS: TestDefaultConfig_env (0.00s)
--- SKIP: TestSetQueryOptions (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestSetWriteOptions (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestRequestToHTTP (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- PASS: TestParseQueryMeta (0.00s)
--- PASS: TestParseWriteMeta (0.00s)
--- SKIP: TestQueryString (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestEvaluations_List (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestEvaluations_PrefixList (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestEvaluations_Info (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestEvaluations_Allocations (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestJobs_Register (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestJobs_Info (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestJobs_PrefixList (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestJobs_List (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestJobs_Allocations (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestJobs_Evaluations (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestJobs_Deregister (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestJobs_ForceEvaluate (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestJobs_PeriodicForce (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestNodes_List (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestNodes_PrefixList (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestNodes_Info (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestNodes_ToggleDrain (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestNodes_Allocations (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestNodes_ForceEvaluate (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestRegionsList (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestStatus_Leader (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestSystem_GarbageCollect (0.00s)
	server.go:107: nomad not found on $PATH, skipping
PASS
ok  	github.com/hashicorp/nomad/api	0.424s
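
The SKIP results above come from the api package's integration tests: they launch a real nomad agent process, and the test harness (server.go:107 in the messages above) skips each test when no nomad binary is installed on $PATH inside the build chroot. A minimal sketch of that kind of guard, with an illustrative helper name rather than the package's actual server.go code:

    // Illustrative only -- not the api package's real server.go. It mirrors
    // the "nomad not found on $PATH, skipping" behaviour seen above.
    package api

    import (
        "os/exec"
        "testing"
    )

    // skipWithoutNomad skips an integration test when the nomad binary the
    // test server would launch cannot be found on $PATH.
    func skipWithoutNomad(t *testing.T) {
        if _, err := exec.LookPath("nomad"); err != nil {
            t.Skip("nomad not found on $PATH, skipping")
        }
    }
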
=== RUN   TestAllocRunner_SimpleRun
--- SKIP: TestAllocRunner_SimpleRun (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestAllocRunner_TerminalUpdate_Destroy
--- SKIP: TestAllocRunner_TerminalUpdate_Destroy (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestAllocRunner_Destroy
--- SKIP: TestAllocRunner_Destroy (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestAllocRunner_Update
--- SKIP: TestAllocRunner_Update (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestAllocRunner_SaveRestoreState
--- SKIP: TestAllocRunner_SaveRestoreState (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestAllocRunner_SaveRestoreState_TerminalAlloc
--- SKIP: TestAllocRunner_SaveRestoreState_TerminalAlloc (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
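
The AllocRunner tests above, like many driver tests later in this log, skip via driver_compatible.go:12 because the build runs as an unprivileged user. A sketch of the usual root-on-Linux guard behind such skips; the function name is an assumption, not nomad's actual helper:

    // Sketch of a root-on-linux test guard in the spirit of the
    // driver_compatible.go skips in this log; names are illustrative.
    package client

    import (
        "os"
        "runtime"
        "testing"
    )

    func requireRootOnLinux(t *testing.T) {
        if runtime.GOOS != "linux" || os.Geteuid() != 0 {
            t.Skip("Test only available running as root on linux")
        }
    }
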
=== RUN   TestClient_StartStop
2016/04/14 12:55:27 [INFO] client: using state directory /tmp/NomadClient760253538
2016/04/14 12:55:27 [INFO] client: using alloc directory /tmp/NomadClient294585433
2016/04/14 12:55:27 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:55:27 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:55:29 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:55:31 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:55:31 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:55:31 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:55:32 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:55:32 [DEBUG] driver.docker: using client connection initialized from environment
2016/04/14 12:55:32 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:55:32 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:55:32 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:55:32 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:55:32 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:55:32 [DEBUG] client: available drivers []
2016/04/14 12:55:32 [INFO] client: setting server address list: []
2016/04/14 12:55:32 [INFO] client: shutting down
--- PASS: TestClient_StartStop (4.10s)
=== RUN   TestClient_RPC
2016/04/14 12:55:32 [INFO] consul: shutting down consul service
2016/04/14 12:55:32 [ERR] client: failed to query for node allocations: no known servers
2016/04/14 12:55:32 [INFO] serf: EventMemberJoin: Node 16001.global 127.0.0.1
2016/04/14 12:55:32 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:55:32 [INFO] client: using state directory /tmp/NomadClient509020644
2016/04/14 12:55:32 [INFO] client: using alloc directory /tmp/NomadClient021476339
2016/04/14 12:55:32 [INFO] raft: Node at 127.0.0.1:16001 [Leader] entering Leader state
2016/04/14 12:55:32 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:55:32 [DEBUG] raft: Node 127.0.0.1:16001 updated peer set (2): [127.0.0.1:16001]
2016/04/14 12:55:32 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:55:32 [INFO] nomad: cluster leadership acquired
2016/04/14 12:55:32 [INFO] nomad: adding server Node 16001.global (Addr: 127.0.0.1:16001) (DC: dc1)
2016/04/14 12:55:32 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:55:32 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:55:34 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:55:36 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:55:36 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:55:36 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:55:36 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:55:36 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:55:36 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:55:36 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:55:36 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:55:36 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:55:36 [DEBUG] client: available drivers []
2016/04/14 12:55:36 [INFO] client: setting server address list: [127.0.0.1:16001]
2016/04/14 12:55:36 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:55:36 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:55:36 [DEBUG] client: node registration complete
2016/04/14 12:55:36 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:55:36 [INFO] client: shutting down
2016/04/14 12:55:36 [INFO] nomad: shutting down server
2016/04/14 12:55:36 [WARN] serf: Shutdown without a Leave
2016/04/14 12:55:36 [DEBUG] client: state updated to ready
2016/04/14 12:55:36 [INFO] consul: shutting down consul service
2016/04/14 12:55:36 [ERR] client: failed to query for node allocations: rpc error: EOF
--- PASS: TestClient_RPC (4.07s)
=== RUN   TestClient_RPC_Passthrough
2016/04/14 12:55:36 [INFO] serf: EventMemberJoin: Node 16003.global 127.0.0.1
2016/04/14 12:55:36 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:55:36 [INFO] client: using state directory /tmp/NomadClient112808118
2016/04/14 12:55:36 [INFO] client: using alloc directory /tmp/NomadClient479838877
2016/04/14 12:55:36 [INFO] raft: Node at 127.0.0.1:16003 [Leader] entering Leader state
2016/04/14 12:55:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:55:36 [DEBUG] raft: Node 127.0.0.1:16003 updated peer set (2): [127.0.0.1:16003]
2016/04/14 12:55:36 [INFO] nomad: cluster leadership acquired
2016/04/14 12:55:36 [INFO] nomad: adding server Node 16003.global (Addr: 127.0.0.1:16003) (DC: dc1)
2016/04/14 12:55:36 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:55:36 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:55:38 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:55:40 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:55:40 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:55:40 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:55:40 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:55:40 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:55:40 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:55:40 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:55:40 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:55:40 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:55:40 [DEBUG] client: available drivers []
2016/04/14 12:55:40 [INFO] client: setting server address list: []
2016/04/14 12:55:40 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:55:40 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:55:40 [DEBUG] client: node registration complete
2016/04/14 12:55:40 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:55:40 [DEBUG] client: state updated to ready
2016/04/14 12:55:40 [INFO] client: shutting down
2016/04/14 12:55:40 [INFO] nomad: shutting down server
2016/04/14 12:55:40 [WARN] serf: Shutdown without a Leave
2016/04/14 12:55:40 [INFO] consul: shutting down consul service
--- PASS: TestClient_RPC_Passthrough (4.06s)
=== RUN   TestClient_Fingerprint
2016/04/14 12:55:40 [INFO] client: using state directory /tmp/NomadClient486377816
2016/04/14 12:55:40 [INFO] client: using alloc directory /tmp/NomadClient806222039
2016/04/14 12:55:40 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:55:40 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:55:42 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:55:44 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:55:44 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:55:44 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:55:44 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:55:44 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:55:44 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:55:44 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:55:44 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:55:44 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:55:44 [DEBUG] client: available drivers []
2016/04/14 12:55:44 [INFO] client: setting server address list: []
2016/04/14 12:55:44 [INFO] client: shutting down
--- PASS: TestClient_Fingerprint (4.04s)
=== RUN   TestClient_HasNodeChanged
2016/04/14 12:55:44 [INFO] client: using state directory /tmp/NomadClient171334218
2016/04/14 12:55:44 [INFO] client: using alloc directory /tmp/NomadClient022734113
2016/04/14 12:55:44 [INFO] consul: shutting down consul service
2016/04/14 12:55:44 [ERR] client: failed to query for node allocations: no known servers
2016/04/14 12:55:44 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:55:44 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:55:44 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:55:46 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:55:48 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:55:48 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:55:48 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:55:48 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:55:48 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:55:48 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:55:48 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:55:48 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:55:48 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:55:48 [DEBUG] client: available drivers []
2016/04/14 12:55:48 [INFO] client: setting server address list: []
2016/04/14 12:55:48 [INFO] client: shutting down
--- PASS: TestClient_HasNodeChanged (4.04s)
=== RUN   TestClient_Fingerprint_InWhitelist
2016/04/14 12:55:48 [INFO] client: using state directory /tmp/NomadClient599535628
2016/04/14 12:55:48 [INFO] client: using alloc directory /tmp/NomadClient482941947
2016/04/14 12:55:48 [INFO] consul: shutting down consul service
2016/04/14 12:55:48 [ERR] client: failed to query for node allocations: no known servers
2016/04/14 12:55:48 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:55:48 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:55:50 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:55:52 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:55:52 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:55:52 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:55:52 [DEBUG] client: applied fingerprints [arch cpu host memory network storage]
2016/04/14 12:55:52 [DEBUG] client: fingerprint modules skipped due to whitelist: [cgroup]
2016/04/14 12:55:52 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:55:52 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:55:52 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:55:52 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:55:52 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:55:52 [DEBUG] client: available drivers []
2016/04/14 12:55:52 [INFO] client: setting server address list: []
2016/04/14 12:55:52 [INFO] client: shutting down
--- FAIL: TestClient_Fingerprint_InWhitelist (4.04s)
	client_test.go:188: missing cpu fingerprint module
=== RUN   TestClient_Fingerprint_OutOfWhitelist
2016/04/14 12:55:52 [INFO] client: using state directory /tmp/NomadClient778312478
2016/04/14 12:55:52 [INFO] client: using alloc directory /tmp/NomadClient375584229
2016/04/14 12:55:52 [INFO] consul: shutting down consul service
2016/04/14 12:55:52 [ERR] client: failed to query for node allocations: no known servers
2016/04/14 12:55:52 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:55:52 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:55:54 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:55:56 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:55:56 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:55:56 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:55:56 [DEBUG] client: applied fingerprints [arch host memory network storage]
2016/04/14 12:55:56 [DEBUG] client: fingerprint modules skipped due to whitelist: [cgroup cpu]
2016/04/14 12:55:56 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:55:56 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:55:56 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:55:56 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:55:56 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:55:56 [DEBUG] client: available drivers []
2016/04/14 12:55:56 [INFO] client: setting server address list: []
2016/04/14 12:55:56 [INFO] client: shutting down
--- PASS: TestClient_Fingerprint_OutOfWhitelist (4.03s)
=== RUN   TestClient_Drivers
2016/04/14 12:55:56 [INFO] client: using state directory /tmp/NomadClient877043200
2016/04/14 12:55:56 [INFO] client: using alloc directory /tmp/NomadClient336507743
2016/04/14 12:55:56 [INFO] consul: shutting down consul service
2016/04/14 12:55:56 [ERR] client: failed to query for node allocations: no known servers
2016/04/14 12:55:56 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:55:56 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:55:56 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:55:58 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:56:00 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:56:00 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:56:00 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:56:00 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:56:00 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:56:00 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:56:00 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:56:00 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:56:00 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:56:00 [DEBUG] client: available drivers []
2016/04/14 12:56:00 [INFO] client: setting server address list: []
2016/04/14 12:56:00 [INFO] client: shutting down
--- FAIL: TestClient_Drivers (4.04s)
	client_test.go:214: missing exec driver
=== RUN   TestClient_Drivers_InWhitelist
2016/04/14 12:56:00 [INFO] client: using state directory /tmp/NomadClient239414066
2016/04/14 12:56:00 [INFO] client: using alloc directory /tmp/NomadClient171553513
2016/04/14 12:56:00 [INFO] consul: shutting down consul service
2016/04/14 12:56:00 [ERR] client: failed to query for node allocations: no known servers
2016/04/14 12:56:00 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:56:00 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:56:00 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:56:02 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:56:04 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:56:04 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:56:04 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:56:04 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:56:04 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:56:04 [DEBUG] client: available drivers []
2016/04/14 12:56:04 [DEBUG] client: drivers skipped due to whitelist: [docker raw_exec java qemu rkt]
2016/04/14 12:56:04 [INFO] client: setting server address list: []
2016/04/14 12:56:04 [INFO] client: shutting down
--- FAIL: TestClient_Drivers_InWhitelist (4.03s)
	client_test.go:231: missing exec driver
=== RUN   TestClient_Drivers_OutOfWhitelist
2016/04/14 12:56:04 [INFO] consul: shutting down consul service
2016/04/14 12:56:04 [ERR] client: failed to query for node allocations: no known servers
2016/04/14 12:56:04 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:56:04 [INFO] client: using state directory /tmp/NomadClient138023220
2016/04/14 12:56:04 [INFO] client: using alloc directory /tmp/NomadClient392578307
2016/04/14 12:56:04 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:56:04 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:56:06 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:56:08 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:56:08 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:56:08 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:56:08 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:56:08 [DEBUG] client: available drivers []
2016/04/14 12:56:08 [DEBUG] client: drivers skipped due to whitelist: [docker exec raw_exec java qemu rkt]
2016/04/14 12:56:08 [INFO] client: setting server address list: []
2016/04/14 12:56:08 [INFO] client: shutting down
--- PASS: TestClient_Drivers_OutOfWhitelist (4.04s)
=== RUN   TestClient_Register
2016/04/14 12:56:08 [INFO] consul: shutting down consul service
2016/04/14 12:56:08 [ERR] client: failed to query for node allocations: no known servers
2016/04/14 12:56:08 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:56:08 [INFO] serf: EventMemberJoin: Node 16005.global 127.0.0.1
2016/04/14 12:56:08 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:56:08 [INFO] raft: Node at 127.0.0.1:16005 [Leader] entering Leader state
2016/04/14 12:56:08 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:56:08 [DEBUG] raft: Node 127.0.0.1:16005 updated peer set (2): [127.0.0.1:16005]
2016/04/14 12:56:08 [INFO] nomad: cluster leadership acquired
2016/04/14 12:56:08 [INFO] nomad: adding server Node 16005.global (Addr: 127.0.0.1:16005) (DC: dc1)
2016/04/14 12:56:08 [INFO] client: using state directory /tmp/NomadClient455614086
2016/04/14 12:56:08 [INFO] client: using alloc directory /tmp/NomadClient034612781
2016/04/14 12:56:08 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:56:08 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:56:10 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:56:12 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:56:12 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:56:12 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:56:12 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:56:12 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:56:12 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:56:12 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:56:12 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:56:12 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:56:12 [DEBUG] client: available drivers []
2016/04/14 12:56:12 [INFO] client: setting server address list: []
2016/04/14 12:56:12 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:56:12 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:56:12 [DEBUG] client: node registration complete
2016/04/14 12:56:12 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:56:12 [DEBUG] client: state updated to ready
2016/04/14 12:56:12 [INFO] client: shutting down
2016/04/14 12:56:12 [INFO] nomad: shutting down server
2016/04/14 12:56:12 [WARN] serf: Shutdown without a Leave
2016/04/14 12:56:12 [INFO] consul: shutting down consul service
2016/04/14 12:56:12 [INFO] nomad: cluster leadership lost
--- PASS: TestClient_Register (4.11s)
=== RUN   TestClient_Heartbeat
2016/04/14 12:56:12 [INFO] raft: Node at 127.0.0.1:16007 [Leader] entering Leader state
2016/04/14 12:56:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:56:12 [DEBUG] raft: Node 127.0.0.1:16007 updated peer set (2): [127.0.0.1:16007]
2016/04/14 12:56:12 [INFO] serf: EventMemberJoin: Node 16007.global 127.0.0.1
2016/04/14 12:56:12 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:56:12 [INFO] nomad: cluster leadership acquired
2016/04/14 12:56:12 [INFO] nomad: adding server Node 16007.global (Addr: 127.0.0.1:16007) (DC: dc1)
2016/04/14 12:56:12 [INFO] client: using state directory /tmp/NomadClient161889704
2016/04/14 12:56:12 [INFO] client: using alloc directory /tmp/NomadClient599337703
2016/04/14 12:56:12 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:56:12 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:56:14 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:56:16 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:56:16 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:56:16 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:56:16 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:56:16 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:56:16 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:56:16 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:56:16 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:56:16 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:56:16 [DEBUG] client: available drivers []
2016/04/14 12:56:16 [INFO] client: setting server address list: []
2016/04/14 12:56:16 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:56:16 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:56:16 [DEBUG] client: node registration complete
2016/04/14 12:56:16 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:56:16 [DEBUG] client: state updated to ready
2016/04/14 12:56:16 [INFO] client: shutting down
2016/04/14 12:56:16 [INFO] nomad: shutting down server
2016/04/14 12:56:16 [WARN] serf: Shutdown without a Leave
2016/04/14 12:56:16 [INFO] consul: shutting down consul service
--- PASS: TestClient_Heartbeat (4.08s)
=== RUN   TestClient_UpdateAllocStatus
2016/04/14 12:56:16 [INFO] serf: EventMemberJoin: Node 16009.global 127.0.0.1
2016/04/14 12:56:16 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:56:16 [INFO] raft: Node at 127.0.0.1:16009 [Leader] entering Leader state
2016/04/14 12:56:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:56:16 [DEBUG] raft: Node 127.0.0.1:16009 updated peer set (2): [127.0.0.1:16009]
2016/04/14 12:56:16 [INFO] nomad: cluster leadership acquired
2016/04/14 12:56:16 [INFO] nomad: adding server Node 16009.global (Addr: 127.0.0.1:16009) (DC: dc1)
2016/04/14 12:56:16 [INFO] client: using state directory /tmp/NomadClient283846426
2016/04/14 12:56:16 [INFO] client: using alloc directory /tmp/NomadClient148278705
2016/04/14 12:56:16 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:56:16 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:56:18 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:56:20 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:56:20 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:56:20 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:56:20 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:56:20 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:56:20 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:56:20 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:56:20 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:56:20 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:56:20 [DEBUG] client: available drivers []
2016/04/14 12:56:20 [INFO] client: setting server address list: []
2016/04/14 12:56:20 [DEBUG] client: updated allocations at index 100 (pulled 1) (filtered 0)
2016/04/14 12:56:20 [DEBUG] client: allocs: (added 1) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:56:20 [DEBUG] client: node registration complete
2016/04/14 12:56:20 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:56:20 [DEBUG] client: 1 evaluations triggered by node update
2016/04/14 12:56:20 [DEBUG] client: state updated to ready
2016/04/14 12:56:20 [DEBUG] client: starting task runners for alloc '453a4154-d7e3-cc3a-1313-90e9627629d8'
2016/04/14 12:56:20 [DEBUG] client: starting task context for 'web' (alloc '453a4154-d7e3-cc3a-1313-90e9627629d8')
2016/04/14 12:56:20 [DEBUG] plugin: starting plugin: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad []string{"/<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad", "executor", "/tmp/NomadClient148278705/453a4154-d7e3-cc3a-1313-90e9627629d8/web/web-executor.out"}
2016/04/14 12:56:20 [DEBUG] plugin: waiting for RPC address for: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad
2016/04/14 12:56:20 [DEBUG] worker: dequeued evaluation 204b6fc8-a976-8d22-62e5-862bbbb92509
2016/04/14 12:56:20 [DEBUG] sched: <Eval '204b6fc8-a976-8d22-62e5-862bbbb92509' JobID: 'cba32e15-3f5d-0c95-ea6e-7ea772df64e4'>: allocs: (place 0) (update 0) (migrate 0) (stop 1) (ignore 0)
2016/04/14 12:56:20 [DEBUG] worker: submitted plan for evaluation 204b6fc8-a976-8d22-62e5-862bbbb92509
2016/04/14 12:56:20 [DEBUG] sched: <Eval '204b6fc8-a976-8d22-62e5-862bbbb92509' JobID: 'cba32e15-3f5d-0c95-ea6e-7ea772df64e4'>: setting status to complete
2016/04/14 12:56:20 [DEBUG] worker: updated evaluation <Eval '204b6fc8-a976-8d22-62e5-862bbbb92509' JobID: 'cba32e15-3f5d-0c95-ea6e-7ea772df64e4'>
2016/04/14 12:56:20 [DEBUG] worker: ack for evaluation 204b6fc8-a976-8d22-62e5-862bbbb92509
2016/04/14 12:56:20 [DEBUG] plugin: nomad: 2016/04/14 12:56:20 [DEBUG] plugin: plugin address: unix /tmp/plugin227804701
2016/04/14 12:56:20 [DEBUG] plugin: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad: plugin process exited
2016/04/14 12:56:20 [ERR] client: failed to start task 'web' for alloc '453a4154-d7e3-cc3a-1313-90e9627629d8': error starting process via the plugin: Failed to mount shared directory for task web: operation not permitted
2016/04/14 12:56:20 [INFO] client: Not restarting task: web for alloc: 453a4154-d7e3-cc3a-1313-90e9627629d8 
2016/04/14 12:56:20 [INFO] client: shutting down
2016/04/14 12:56:20 [ERR] client: failed to destroy context for alloc '453a4154-d7e3-cc3a-1313-90e9627629d8': 1 error(s) occurred:

* 1 error(s) occurred:

* failed to unmount shared alloc dir "/tmp/NomadClient148278705/453a4154-d7e3-cc3a-1313-90e9627629d8/web/alloc": operation not permitted
2016/04/14 12:56:20 [DEBUG] client: terminating runner for alloc '453a4154-d7e3-cc3a-1313-90e9627629d8'
2016/04/14 12:56:20 [INFO] nomad: shutting down server
2016/04/14 12:56:20 [WARN] serf: Shutdown without a Leave
2016/04/14 12:56:20 [INFO] consul: shutting down consul service
2016/04/14 12:56:20 [INFO] nomad: cluster leadership lost
--- PASS: TestClient_UpdateAllocStatus (4.33s)
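
TestClient_UpdateAllocStatus still passes, but the "Failed to mount shared directory ... operation not permitted" errors above show why no task actually ran: bind-mounting the shared alloc directory needs privileges the sbuild chroot does not grant. A standalone, Linux-only sketch (not nomad's executor code) that reproduces the same EPERM for an unprivileged user:

    // Standalone demonstration, not nomad code: an unprivileged bind mount
    // fails with "operation not permitted", matching the errors above.
    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        src, err := os.MkdirTemp("", "alloc-src")
        if err != nil {
            panic(err)
        }
        dst, err := os.MkdirTemp("", "alloc-dst")
        if err != nil {
            panic(err)
        }
        // Bind mounts require CAP_SYS_ADMIN; without it the kernel returns EPERM.
        err = syscall.Mount(src, dst, "", syscall.MS_BIND, "")
        fmt.Println("bind mount result:", err)
    }
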
=== RUN   TestClient_WatchAllocs
--- SKIP: TestClient_WatchAllocs (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestClient_SaveRestoreState
--- SKIP: TestClient_SaveRestoreState (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestClient_Init
2016/04/14 12:56:20 [INFO] client: using state directory /tmp/NomadClient496455435
2016/04/14 12:56:20 [INFO] client: using alloc directory /tmp/nomad034712412/alloc
--- PASS: TestClient_Init (0.00s)
=== RUN   TestClient_SetServers
2016/04/14 12:56:20 [INFO] client: using state directory /tmp/NomadClient199162606
2016/04/14 12:56:20 [INFO] client: using alloc directory /tmp/NomadClient013481845
2016/04/14 12:56:20 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:56:20 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:56:22 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:56:24 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:56:24 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:56:24 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
2016/04/14 12:56:24 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:56:24 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:56:24 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:56:24 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:56:24 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:56:24 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:56:24 [DEBUG] client: available drivers []
2016/04/14 12:56:24 [INFO] client: setting server address list: []
2016/04/14 12:56:24 [INFO] client: setting server address list: []
2016/04/14 12:56:24 [INFO] client: setting server address list: [foo:4647]
2016/04/14 12:56:24 [INFO] client: setting server address list: [foo:5445 bar:8080]
2016/04/14 12:56:24 [INFO] client: setting server address list: [bar:8080]
2016/04/14 12:56:24 [INFO] client: setting server address list: [baz:9090 zip:4545]
2016/04/14 12:56:24 [WARN] client: port not specified, using default port
2016/04/14 12:56:24 [WARN] client: port not specified, using default port
2016/04/14 12:56:24 [WARN] client: port not specified, using default port
2016/04/14 12:56:24 [INFO] client: setting server address list: [foo:4647 bar:4647 baz:4647]
--- PASS: TestClient_SetServers (4.04s)
=== RUN   TestConsul_MakeChecks
=== RUN   TestConsul_InvalidPortLabelForService
=== RUN   TestConsul_Services_Deleted_From_Task
=== RUN   TestConsul_Service_Should_Be_Re_Reregistered_On_Change
=== RUN   TestConsul_AddCheck_To_Service
=== RUN   TestConsul_ModifyCheck
=== RUN   TestConsul_FilterNomadServicesAndChecks
=== RUN   TestClient_RestartTracker_ModeDelay
=== RUN   TestClient_RestartTracker_ModeFail
=== RUN   TestClient_RestartTracker_NoRestartOnSuccess
=== RUN   TestClient_RestartTracker_ZeroAttempts
=== RUN   TestClient_RestartTracker_StartError_Recoverable
=== RUN   TestTaskRunner_SimpleRun
--- SKIP: TestTaskRunner_SimpleRun (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestTaskRunner_Destroy
--- SKIP: TestTaskRunner_Destroy (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestTaskRunner_Update
--- SKIP: TestTaskRunner_Update (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestTaskRunner_SaveRestoreState
--- SKIP: TestTaskRunner_SaveRestoreState (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestTaskRunner_Download_List
2016/04/14 12:56:25 [DEBUG] client: periodically checking for node changes at duration 5s
--- SKIP: TestTaskRunner_Download_List (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestTaskRunner_Download_Retries
--- SKIP: TestTaskRunner_Download_Retries (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestDiffAllocs
=== RUN   TestRandomStagger
=== RUN   TestShuffleStrings
=== RUN   TestPersistRestoreState
--- PASS: TestConsul_MakeChecks (0.00s)
--- PASS: TestConsul_InvalidPortLabelForService (0.00s)
logger: consul.go:165: [INFO] consul: registering service example-cache-redis with consul.
--- PASS: TestConsul_Services_Deleted_From_Task (0.00s)
logger: consul.go:165: [INFO] consul: registering service example-cache-redis with consul.
--- PASS: TestConsul_Service_Should_Be_Re_Reregistered_On_Change (0.00s)
logger: consul.go:165: [INFO] consul: registering service example-cache-redis with consul.
--- PASS: TestConsul_AddCheck_To_Service (0.00s)
logger: consul.go:165: [INFO] consul: registering service example-cache-redis with consul.
--- PASS: TestConsul_ModifyCheck (0.00s)
--- PASS: TestConsul_FilterNomadServicesAndChecks (0.00s)
--- PASS: TestClient_RestartTracker_ModeDelay (0.00s)
--- PASS: TestClient_RestartTracker_ModeFail (0.00s)
--- PASS: TestClient_RestartTracker_NoRestartOnSuccess (0.00s)
--- PASS: TestClient_RestartTracker_ZeroAttempts (0.00s)
--- PASS: TestClient_RestartTracker_StartError_Recoverable (0.00s)
--- PASS: TestDiffAllocs (0.00s)
--- PASS: TestRandomStagger (0.00s)
--- PASS: TestShuffleStrings (0.00s)
--- PASS: TestPersistRestoreState (0.00s)
FAIL
FAIL	github.com/hashicorp/nomad/client	57.201s
=== RUN   TestAllocDir_BuildAlloc
--- PASS: TestAllocDir_BuildAlloc (0.01s)
=== RUN   TestAllocDir_LogDir
--- PASS: TestAllocDir_LogDir (0.00s)
=== RUN   TestAllocDir_EmbedNonExistent
--- PASS: TestAllocDir_EmbedNonExistent (0.01s)
=== RUN   TestAllocDir_EmbedDirs
--- PASS: TestAllocDir_EmbedDirs (0.01s)
=== RUN   TestAllocDir_MountSharedAlloc
--- SKIP: TestAllocDir_MountSharedAlloc (0.00s)
	driver_compatible.go:51: Must be root to run test
PASS
ok  	github.com/hashicorp/nomad/client/allocdir	0.151s
=== RUN   TestConfigRead
--- PASS: TestConfigRead (0.00s)
=== RUN   TestConfigReadDefault
--- PASS: TestConfigReadDefault (0.00s)
PASS
ok  	github.com/hashicorp/nomad/client/config	0.057s
=== RUN   TestDockerDriver_Handle
=== RUN   TestDockerDriver_Fingerprint
=== RUN   TestDockerDriver_StartOpen_Wait
=== RUN   TestDockerDriver_Start_Wait
=== RUN   TestDockerDriver_Start_Wait_AllocDir
=== RUN   TestDockerDriver_Start_Kill_Wait
=== RUN   TestDocker_StartN
=== RUN   TestDocker_StartNVersions
=== RUN   TestDockerHostNet
=== RUN   TestDockerLabels
=== RUN   TestDockerDNS
=== RUN   TestDockerPortsNoMap
=== RUN   TestDockerPortsMapping
=== RUN   TestDriver_GetTaskEnv
=== RUN   TestMapMergeStrInt
=== RUN   TestMapMergeStrStr
=== RUN   TestExecDriver_Fingerprint
=== RUN   TestExecDriver_StartOpen_Wait
=== RUN   TestExecDriver_KillUserPid_OnPluginReconnectFailure
=== RUN   TestExecDriver_Start_Wait
=== RUN   TestExecDriver_Start_Wait_AllocDir
=== RUN   TestExecDriver_Start_Kill_Wait
=== RUN   TestJavaDriver_Fingerprint
=== RUN   TestJavaDriver_StartOpen_Wait
=== RUN   TestJavaDriver_Start_Wait
=== RUN   TestJavaDriver_Start_Kill_Wait
=== RUN   TestQemuDriver_Fingerprint
=== RUN   TestQemuDriver_StartOpen_Wait
=== RUN   TestRawExecDriver_Fingerprint
=== RUN   TestRawExecDriver_StartOpen_Wait
=== RUN   TestRawExecDriver_Start_Wait
=== RUN   TestRawExecDriver_Start_Wait_AllocDir
=== RUN   TestRawExecDriver_Start_Kill_Wait
=== RUN   TestRktVersionRegex
--- PASS: TestRktVersionRegex (0.00s)
=== RUN   TestRktDriver_Fingerprint
--- SKIP: TestRktDriver_Fingerprint (0.00s)
	driver_compatible.go:36: Must be root on non-windows environments to run test
=== RUN   TestRktDriver_Start_DNS
--- SKIP: TestRktDriver_Start_DNS (0.00s)
	driver_compatible.go:36: Must be root on non-windows environments to run test
=== RUN   TestRktDriver_Start_Wait
--- SKIP: TestRktDriver_Start_Wait (0.00s)
	driver_compatible.go:36: Must be root on non-windows environments to run test
=== RUN   TestRktDriver_Start_Wait_Skip_Trust
--- SKIP: TestRktDriver_Start_Wait_Skip_Trust (0.00s)
	driver_compatible.go:36: Must be root on non-windows environments to run test
=== RUN   TestRktDriver_Start_Wait_AllocDir
--- SKIP: TestRktDriver_Start_Wait_AllocDir (0.00s)
	driver_compatible.go:36: Must be root on non-windows environments to run test
=== RUN   TestDriver_KillTimeout
--- PASS: TestDriver_KillTimeout (0.00s)
2016/04/14 12:56:01 [DEBUG] plugin: starting plugin: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad []string{"/<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad", "syslog", "/tmp/292541627"}
2016/04/14 12:56:01 [DEBUG] plugin: waiting for RPC address for: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad
2016/04/14 12:56:02 [DEBUG] plugin: nomad: 2016/04/14 12:56:02 [DEBUG] plugin: plugin address: unix /tmp/plugin086474748
2016/04/14 12:56:02 [DEBUG] plugin: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad: plugin process exited
--- PASS: TestDockerDriver_Handle (0.15s)
2016/04/14 12:56:02 [DEBUG] driver.docker: using client connection initialized from environment
2016/04/14 12:56:02 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:56:02 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
--- PASS: TestDockerDriver_Fingerprint (0.01s)
	docker_test.go:36: Failed to connect to docker daemon: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
	docker_test.go:190: Docker daemon not available. The remainder of the docker tests will be skipped.
	docker_test.go:192: Found docker version 
--- SKIP: TestDockerDriver_StartOpen_Wait (0.00s)
	docker_test.go:36: Failed to connect to docker daemon: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
--- SKIP: TestDockerDriver_Start_Wait (0.00s)
	docker_test.go:36: Failed to connect to docker daemon: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
--- SKIP: TestDockerDriver_Start_Wait_AllocDir (0.00s)
	docker_test.go:36: Failed to connect to docker daemon: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
--- SKIP: TestDockerDriver_Start_Kill_Wait (0.00s)
	docker_test.go:36: Failed to connect to docker daemon: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
--- SKIP: TestDocker_StartN (0.00s)
	docker_test.go:36: Failed to connect to docker daemon: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
--- SKIP: TestDocker_StartNVersions (0.00s)
	docker_test.go:36: Failed to connect to docker daemon: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
--- SKIP: TestDockerHostNet (0.00s)
	docker_test.go:36: Failed to connect to docker daemon: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
--- SKIP: TestDockerLabels (0.00s)
	docker_test.go:36: Failed to connect to docker daemon: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
--- SKIP: TestDockerDNS (0.00s)
	docker_test.go:36: Failed to connect to docker daemon: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
--- SKIP: TestDockerPortsNoMap (0.00s)
	docker_test.go:36: Failed to connect to docker daemon: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
--- SKIP: TestDockerPortsMapping (0.00s)
	docker_test.go:36: Failed to connect to docker daemon: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
--- PASS: TestDriver_GetTaskEnv (0.00s)
--- PASS: TestMapMergeStrInt (0.00s)
--- PASS: TestMapMergeStrStr (0.00s)
--- SKIP: TestExecDriver_Fingerprint (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
--- SKIP: TestExecDriver_StartOpen_Wait (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
--- SKIP: TestExecDriver_KillUserPid_OnPluginReconnectFailure (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
--- SKIP: TestExecDriver_Start_Wait (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
--- SKIP: TestExecDriver_Start_Wait_AllocDir (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
--- SKIP: TestExecDriver_Start_Kill_Wait (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
--- SKIP: TestJavaDriver_Fingerprint (0.00s)
	driver_compatible.go:18: Test only available when running as root on linux
--- SKIP: TestJavaDriver_StartOpen_Wait (0.00s)
	java_test.go:54: Java not found; skipping
--- SKIP: TestJavaDriver_Start_Wait (0.00s)
	java_test.go:107: Java not found; skipping
--- SKIP: TestJavaDriver_Start_Kill_Wait (0.00s)
	java_test.go:169: Java not found; skipping
--- SKIP: TestQemuDriver_Fingerprint (0.00s)
	driver_compatible.go:30: Must have Qemu installed for Qemu specific tests to run
--- SKIP: TestQemuDriver_StartOpen_Wait (0.00s)
	driver_compatible.go:30: Must have Qemu installed for Qemu specific tests to run
2016/04/14 12:56:02 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
--- PASS: TestRawExecDriver_Fingerprint (0.00s)
2016/04/14 12:56:02 [DEBUG] plugin: starting plugin: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad []string{"/<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad", "executor", "/tmp/7e79c0c8-96a1-02c5-1975-6900b63e0c2e/sleep/sleep-executor.out"}
2016/04/14 12:56:02 [DEBUG] plugin: waiting for RPC address for: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad
2016/04/14 12:56:02 [DEBUG] plugin: nomad: 2016/04/14 12:56:02 [DEBUG] plugin: plugin address: unix /tmp/plugin208841706
2016/04/14 12:56:02 [DEBUG] driver.raw_exec: started process with pid: 22574
2016/04/14 12:56:03 [DEBUG] plugin: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad: plugin process exited
--- PASS: TestRawExecDriver_StartOpen_Wait (1.20s)
2016/04/14 12:56:03 [DEBUG] plugin: starting plugin: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad []string{"/<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad", "executor", "/tmp/a005cbbb-70c6-7571-f695-fc0424defc8d/sleep/sleep-executor.out"}
2016/04/14 12:56:03 [DEBUG] plugin: waiting for RPC address for: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad
2016/04/14 12:56:03 [DEBUG] plugin: nomad: 2016/04/14 12:56:03 [DEBUG] plugin: plugin address: unix /tmp/plugin034287057
2016/04/14 12:56:03 [DEBUG] driver.raw_exec: started process with pid: 22595
2016/04/14 12:56:04 [DEBUG] plugin: reattached plugin process exited
--- PASS: TestRawExecDriver_Start_Wait (1.17s)
2016/04/14 12:56:04 [DEBUG] plugin: starting plugin: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad []string{"/<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad", "executor", "/tmp/a42106e4-d12f-cf13-335d-e970c9328556/sleep/sleep-executor.out"}
2016/04/14 12:56:04 [DEBUG] plugin: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad: plugin process exited
2016/04/14 12:56:04 [DEBUG] plugin: waiting for RPC address for: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad
2016/04/14 12:56:04 [DEBUG] plugin: nomad: 2016/04/14 12:56:04 [DEBUG] plugin: plugin address: unix /tmp/plugin398027212
2016/04/14 12:56:04 [DEBUG] driver.raw_exec: started process with pid: 22619
--- PASS: TestRawExecDriver_Start_Wait_AllocDir (1.21s)
2016/04/14 12:56:05 [DEBUG] plugin: starting plugin: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad []string{"/<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad", "executor", "/tmp/22bade9c-977c-ef73-a029-bc834f37bfdc/sleep/sleep-executor.out"}
2016/04/14 12:56:05 [DEBUG] plugin: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad: plugin process exited
2016/04/14 12:56:05 [DEBUG] plugin: waiting for RPC address for: /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/bin/nomad
2016/04/14 12:56:05 [DEBUG] plugin: nomad: 2016/04/14 12:56:05 [DEBUG] plugin: plugin address: unix /tmp/plugin965855912
2016/04/14 12:56:05 [DEBUG] driver.raw_exec: started process with pid: 22644
--- PASS: TestRawExecDriver_Start_Kill_Wait (1.14s)
PASS
ok  	github.com/hashicorp/nomad/client/driver	4.989s
=== RUN   TestEnvironment_ParseAndReplace_Env
--- PASS: TestEnvironment_ParseAndReplace_Env (0.00s)
=== RUN   TestEnvironment_ParseAndReplace_Meta
--- PASS: TestEnvironment_ParseAndReplace_Meta (0.00s)
=== RUN   TestEnvironment_ParseAndReplace_Attr
--- PASS: TestEnvironment_ParseAndReplace_Attr (0.00s)
=== RUN   TestEnvironment_ParseAndReplace_Node
--- PASS: TestEnvironment_ParseAndReplace_Node (0.00s)
=== RUN   TestEnvironment_ParseAndReplace_Mixed
--- PASS: TestEnvironment_ParseAndReplace_Mixed (0.00s)
=== RUN   TestEnvironment_ReplaceEnv_Mixed
--- PASS: TestEnvironment_ReplaceEnv_Mixed (0.00s)
=== RUN   TestEnvironment_AsList
--- PASS: TestEnvironment_AsList (0.00s)
=== RUN   TestEnvironment_ClearEnvvars
--- PASS: TestEnvironment_ClearEnvvars (0.00s)
=== RUN   TestEnvironment_Interprolate
--- PASS: TestEnvironment_Interprolate (0.00s)
PASS
ok  	github.com/hashicorp/nomad/client/driver/env	0.060s
=== RUN   TestExecutor_Start_Invalid
2016/04/14 12:55:53 [DEBUG] executor: launching command /bin/foobar 1
--- PASS: TestExecutor_Start_Invalid (0.01s)
=== RUN   TestExecutor_Start_Wait_Failure_Code
2016/04/14 12:55:53 [DEBUG] executor: launching command /bin/sleep fail
--- PASS: TestExecutor_Start_Wait_Failure_Code (0.35s)
=== RUN   TestExecutor_Start_Wait
2016/04/14 12:55:53 [DEBUG] executor: launching command /bin/echo hello world
--- PASS: TestExecutor_Start_Wait (0.03s)
=== RUN   TestExecutor_IsolationAndConstraints
--- SKIP: TestExecutor_IsolationAndConstraints (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestExecutor_DestroyCgroup
--- SKIP: TestExecutor_DestroyCgroup (0.00s)
	driver_compatible.go:12: Test only available running as root on linux
=== RUN   TestExecutor_Start_Kill
2016/04/14 12:55:53 [DEBUG] executor: launching command /bin/sleep 10 && hello world
--- PASS: TestExecutor_Start_Kill (2.02s)
PASS
ok  	github.com/hashicorp/nomad/client/driver/executor	2.470s
=== RUN   TestFileRotator_IncorrectPath
--- PASS: TestFileRotator_IncorrectPath (0.00s)
=== RUN   TestFileRotator_CreateNewFile
--- PASS: TestFileRotator_CreateNewFile (0.00s)
=== RUN   TestFileRotator_OpenLastFile
--- PASS: TestFileRotator_OpenLastFile (0.00s)
=== RUN   TestFileRotator_WriteToCurrentFile
--- PASS: TestFileRotator_WriteToCurrentFile (0.20s)
=== RUN   TestFileRotator_RotateFiles
--- PASS: TestFileRotator_RotateFiles (0.20s)
=== RUN   TestFileRotator_WriteRemaining
--- PASS: TestFileRotator_WriteRemaining (0.20s)
=== RUN   TestFileRotator_PurgeOldFiles
--- PASS: TestFileRotator_PurgeOldFiles (1.01s)
=== RUN   TestLogParser_Priority
--- PASS: TestLogParser_Priority (0.00s)
PASS
ok  	github.com/hashicorp/nomad/client/driver/logging	1.671s
?   	github.com/hashicorp/nomad/client/driver/structs	[no test files]
=== RUN   TestArchFingerprint
--- PASS: TestArchFingerprint (0.00s)
=== RUN   TestCGroupFingerprint
2016/04/14 12:56:28 [INFO] fingerprint.cgroups: cgroups are available
--- PASS: TestCGroupFingerprint (0.00s)
=== RUN   TestConsulFingerprint
2016/04/14 12:56:28 [INFO] fingerprint.consul: consul agent is available
--- PASS: TestConsulFingerprint (0.01s)
=== RUN   TestCPUFingerprint
--- FAIL: TestCPUFingerprint (0.00s)
	cpu_test.go:32: Missing CPU Frequency
=== RUN   TestEnvAWSFingerprint_nonAws
--- PASS: TestEnvAWSFingerprint_nonAws (0.48s)
=== RUN   TestEnvAWSFingerprint_aws
--- PASS: TestEnvAWSFingerprint_aws (0.04s)
=== RUN   TestNetworkFingerprint_AWS
--- PASS: TestNetworkFingerprint_AWS (0.03s)
=== RUN   TestNetworkFingerprint_notAWS
--- PASS: TestNetworkFingerprint_notAWS (0.01s)
=== RUN   TestGCEFingerprint_nonGCE
2016/04/14 12:56:28 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
--- PASS: TestGCEFingerprint_nonGCE (0.01s)
=== RUN   TestFingerprint_GCEWithExternalIp
--- PASS: TestFingerprint_GCEWithExternalIp (0.05s)
=== RUN   TestFingerprint_GCEWithoutExternalIp
--- PASS: TestFingerprint_GCEWithoutExternalIp (0.03s)
=== RUN   TestHostFingerprint
--- PASS: TestHostFingerprint (0.01s)
=== RUN   TestMemoryFingerprint
--- PASS: TestMemoryFingerprint (0.00s)
=== RUN   TestNetworkFingerprint_basic
2016/04/14 12:56:28 [DEBUG] fingerprint.network: Detected interface eth0  with IP 172.17.2.3 during fingerprinting
--- PASS: TestNetworkFingerprint_basic (0.00s)
=== RUN   TestNetworkFingerprint_no_devices
--- PASS: TestNetworkFingerprint_no_devices (0.00s)
=== RUN   TestNetworkFingerprint_default_device_absent
--- PASS: TestNetworkFingerprint_default_device_absent (0.00s)
=== RUN   TestNetworkFingerPrint_default_device
2016/04/14 12:56:28 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.0 during fingerprinting
2016/04/14 12:56:28 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:56:28 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
--- PASS: TestNetworkFingerPrint_default_device (0.00s)
=== RUN   TestNetworkFingerPrint_excludelo_down_interfaces
2016/04/14 12:56:28 [DEBUG] fingerprint.network: Detected interface eth0  with IP 100.64.0.0 during fingerprinting
--- PASS: TestNetworkFingerPrint_excludelo_down_interfaces (0.00s)
=== RUN   TestStorageFingerprint
--- PASS: TestStorageFingerprint (0.01s)
FAIL
FAIL	github.com/hashicorp/nomad/client/fingerprint	0.780s
=== RUN   TestGetArtifact_FileAndChecksum
--- FAIL: TestGetArtifact_FileAndChecksum (0.01s)
	getter_test.go:42: GetArtifact failed: bad response code: 404
=== RUN   TestGetArtifact_InvalidChecksum
--- PASS: TestGetArtifact_InvalidChecksum (0.00s)
=== RUN   TestGetArtifact_Archive
--- FAIL: TestGetArtifact_Archive (0.01s)
	getter_test.go:141: GetArtifact failed: bad response code: 404
FAIL
FAIL	github.com/hashicorp/nomad/client/getter	0.079s
?   	github.com/hashicorp/nomad/client/testutil	[no test files]
=== RUN   TestAgentInfoCommand_Implements
--- PASS: TestAgentInfoCommand_Implements (0.00s)
=== RUN   TestAgentInfoCommand_Run
=== RUN   TestAgentInfoCommand_Fails
--- PASS: TestAgentInfoCommand_Fails (0.00s)
=== RUN   TestAllocStatusCommand_Implements
--- PASS: TestAllocStatusCommand_Implements (0.00s)
=== RUN   TestAllocStatusCommand_Fails
=== RUN   TestClientConfigCommand_Implements
--- PASS: TestClientConfigCommand_Implements (0.00s)
=== RUN   TestClientConfigCommand_UpdateServers
=== RUN   TestClientConfigCommand_Fails
--- PASS: TestClientConfigCommand_Fails (0.00s)
=== RUN   TestEvalMonitorCommand_Implements
--- PASS: TestEvalMonitorCommand_Implements (0.00s)
=== RUN   TestEvalMonitorCommand_Fails
=== RUN   TestHelpers_FormatKV
--- PASS: TestHelpers_FormatKV (0.00s)
=== RUN   TestHelpers_FormatList
--- PASS: TestHelpers_FormatList (0.00s)
=== RUN   TestInitCommand_Implements
--- PASS: TestInitCommand_Implements (0.00s)
=== RUN   TestInitCommand_Run
--- PASS: TestInitCommand_Run (0.00s)
=== RUN   TestMeta_FlagSet
--- PASS: TestMeta_FlagSet (0.00s)
=== RUN   TestMonitor_Update_Eval
--- PASS: TestMonitor_Update_Eval (0.00s)
=== RUN   TestMonitor_Update_Allocs
--- PASS: TestMonitor_Update_Allocs (0.00s)
=== RUN   TestMonitor_Update_SchedulingFailure
--- PASS: TestMonitor_Update_SchedulingFailure (0.00s)
=== RUN   TestMonitor_Update_AllocModification
--- PASS: TestMonitor_Update_AllocModification (0.00s)
=== RUN   TestMonitor_Monitor
=== RUN   TestMonitor_MonitorWithPrefix
=== RUN   TestMonitor_DumpAllocStatus
--- PASS: TestMonitor_DumpAllocStatus (0.00s)
=== RUN   TestNodeDrainCommand_Implements
--- PASS: TestNodeDrainCommand_Implements (0.00s)
=== RUN   TestNodeDrainCommand_Fails
=== RUN   TestNodeStatusCommand_Implements
--- PASS: TestNodeStatusCommand_Implements (0.00s)
=== RUN   TestNodeStatusCommand_Run
=== RUN   TestNodeStatusCommand_Fails
=== RUN   TestRunCommand_Implements
--- PASS: TestRunCommand_Implements (0.00s)
=== RUN   TestRunCommand_Fails
--- PASS: TestRunCommand_Fails (0.02s)
=== RUN   TestServerForceLeaveCommand_Implements
--- PASS: TestServerForceLeaveCommand_Implements (0.00s)
=== RUN   TestServerJoinCommand_Implements
--- PASS: TestServerJoinCommand_Implements (0.00s)
=== RUN   TestServerMembersCommand_Implements
--- PASS: TestServerMembersCommand_Implements (0.00s)
=== RUN   TestServerMembersCommand_Run
=== RUN   TestMembersCommand_Fails
--- PASS: TestMembersCommand_Fails (0.00s)
=== RUN   TestStatusCommand_Implements
--- PASS: TestStatusCommand_Implements (0.00s)
=== RUN   TestStatusCommand_Run
=== RUN   TestStatusCommand_Fails
--- PASS: TestStatusCommand_Fails (0.00s)
=== RUN   TestStopCommand_Implements
--- PASS: TestStopCommand_Implements (0.00s)
=== RUN   TestStopCommand_Fails
=== RUN   TestValidateCommand_Implements
--- PASS: TestValidateCommand_Implements (0.00s)
=== RUN   TestValidateCommand
--- PASS: TestValidateCommand (0.01s)
=== RUN   TestValidateCommand_Fails
--- PASS: TestValidateCommand_Fails (0.00s)
=== RUN   TestVersionCommand_implements
--- PASS: TestVersionCommand_implements (0.00s)
--- SKIP: TestAgentInfoCommand_Run (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestAllocStatusCommand_Fails (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestClientConfigCommand_UpdateServers (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestEvalMonitorCommand_Fails (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestMonitor_Monitor (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestMonitor_MonitorWithPrefix (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestNodeDrainCommand_Fails (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestNodeStatusCommand_Run (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestNodeStatusCommand_Fails (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestServerMembersCommand_Run (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestStatusCommand_Run (0.00s)
	server.go:107: nomad not found on $PATH, skipping
--- SKIP: TestStopCommand_Fails (0.00s)
	server.go:107: nomad not found on $PATH, skipping
PASS
ok  	github.com/hashicorp/nomad/command	0.136s
=== RUN   TestHTTP_AgentSelf
2016/04/14 12:57:32 [INFO] serf: EventMemberJoin: Node 17002.global 127.0.0.1
2016/04/14 12:57:32 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:57:32 [INFO] client: using state directory /tmp/NomadClient474490146
2016/04/14 12:57:32 [INFO] client: using alloc directory /tmp/NomadClient188113433
2016/04/14 12:57:32 [INFO] raft: Node at 127.0.0.1:17002 [Leader] entering Leader state
2016/04/14 12:57:32 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:32 [DEBUG] raft: Node 127.0.0.1:17002 updated peer set (2): [127.0.0.1:17002]
2016/04/14 12:57:32 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:32 [INFO] nomad: adding server Node 17002.global (Addr: 127.0.0.1:17002) (DC: dc1)
2016/04/14 12:57:32 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:57:32 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:57:34 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:57:36 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:57:36 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:57:37 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:57:37 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:57:37 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:57:37 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:57:37 [DEBUG] driver.docker: using client connection initialized from environment
2016/04/14 12:57:37 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:57:37 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:57:37 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:57:37 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:57:37 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:57:37 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:57:37 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:57:37 [INFO] client: setting server address list: []
2016/04/14 12:57:37 [DEBUG] client: node registration complete
2016/04/14 12:57:37 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:57:37 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:57:37 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:57:37 [DEBUG] client: state updated to ready
2016/04/14 12:57:37 [DEBUG] http: Shutting down http server
2016/04/14 12:57:37 [INFO] agent: requesting shutdown
2016/04/14 12:57:37 [INFO] client: shutting down
2016/04/14 12:57:37 [INFO] nomad: shutting down server
2016/04/14 12:57:37 [WARN] serf: Shutdown without a Leave
2016/04/14 12:57:37 [INFO] consul: shutting down consul service
2016/04/14 12:57:37 [INFO] agent: shutdown complete
--- PASS: TestHTTP_AgentSelf (4.07s)
=== RUN   TestHTTP_AgentJoin
2016/04/14 12:57:37 [INFO] serf: EventMemberJoin: Node 17005.global 127.0.0.1
2016/04/14 12:57:37 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:57:37 [INFO] client: using state directory /tmp/NomadClient991922611
2016/04/14 12:57:37 [INFO] client: using alloc directory /tmp/NomadClient288650614
2016/04/14 12:57:37 [INFO] raft: Node at 127.0.0.1:17005 [Leader] entering Leader state
2016/04/14 12:57:37 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:37 [DEBUG] raft: Node 127.0.0.1:17005 updated peer set (2): [127.0.0.1:17005]
2016/04/14 12:57:37 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:37 [INFO] nomad: adding server Node 17005.global (Addr: 127.0.0.1:17005) (DC: dc1)
2016/04/14 12:57:37 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:57:37 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:57:39 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:57:41 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:57:41 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:57:41 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:57:41 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:57:41 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:57:41 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:57:41 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:57:41 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:57:41 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:57:41 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:57:41 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:57:41 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:57:41 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:57:41 [INFO] client: setting server address list: []
2016/04/14 12:57:41 [DEBUG] client: node registration complete
2016/04/14 12:57:41 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:57:41 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:57:41 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:57:41 [DEBUG] client: state updated to ready
2016/04/14 12:57:41 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:17006
2016/04/14 12:57:41 [DEBUG] memberlist: TCP connection from=127.0.0.1:54010
2016/04/14 12:57:41 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:17006
2016/04/14 12:57:41 [DEBUG] memberlist: TCP connection from=127.0.0.1:54011
2016/04/14 12:57:41 [DEBUG] http: Shutting down http server
2016/04/14 12:57:41 [INFO] agent: requesting shutdown
2016/04/14 12:57:41 [INFO] client: shutting down
2016/04/14 12:57:41 [INFO] nomad: shutting down server
2016/04/14 12:57:41 [WARN] serf: Shutdown without a Leave
2016/04/14 12:57:41 [INFO] consul: shutting down consul service
2016/04/14 12:57:41 [INFO] agent: shutdown complete
--- PASS: TestHTTP_AgentJoin (4.07s)
=== RUN   TestHTTP_AgentMembers
2016/04/14 12:57:41 [INFO] serf: EventMemberJoin: Node 17008.global 127.0.0.1
2016/04/14 12:57:41 [INFO] nomad: starting 4 scheduling worker(s) for [system service batch _core]
2016/04/14 12:57:41 [INFO] client: using state directory /tmp/NomadClient478597144
2016/04/14 12:57:41 [INFO] client: using alloc directory /tmp/NomadClient095991447
2016/04/14 12:57:41 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:57:41 [INFO] raft: Node at 127.0.0.1:17008 [Leader] entering Leader state
2016/04/14 12:57:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:41 [DEBUG] raft: Node 127.0.0.1:17008 updated peer set (2): [127.0.0.1:17008]
2016/04/14 12:57:41 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:41 [INFO] nomad: adding server Node 17008.global (Addr: 127.0.0.1:17008) (DC: dc1)
2016/04/14 12:57:41 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:57:43 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:57:45 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:57:45 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:57:45 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:57:45 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:57:45 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:57:45 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:57:45 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:57:45 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:57:45 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:57:45 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:57:45 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:57:45 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:57:45 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:57:45 [INFO] client: setting server address list: []
2016/04/14 12:57:45 [DEBUG] client: node registration complete
2016/04/14 12:57:45 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:57:45 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:57:45 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:57:45 [DEBUG] client: state updated to ready
2016/04/14 12:57:45 [DEBUG] http: Shutting down http server
2016/04/14 12:57:45 [INFO] agent: requesting shutdown
2016/04/14 12:57:45 [INFO] client: shutting down
2016/04/14 12:57:45 [INFO] nomad: shutting down server
2016/04/14 12:57:45 [WARN] serf: Shutdown without a Leave
2016/04/14 12:57:45 [INFO] consul: shutting down consul service
2016/04/14 12:57:45 [INFO] agent: shutdown complete
--- PASS: TestHTTP_AgentMembers (4.06s)
=== RUN   TestHTTP_AgentForceLeave
2016/04/14 12:57:45 [INFO] serf: EventMemberJoin: Node 17011.global 127.0.0.1
2016/04/14 12:57:45 [INFO] nomad: starting 4 scheduling worker(s) for [system service batch _core]
2016/04/14 12:57:45 [INFO] client: using state directory /tmp/NomadClient866269921
2016/04/14 12:57:45 [INFO] client: using alloc directory /tmp/NomadClient893207756
2016/04/14 12:57:45 [INFO] raft: Node at 127.0.0.1:17011 [Leader] entering Leader state
2016/04/14 12:57:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:45 [DEBUG] raft: Node 127.0.0.1:17011 updated peer set (2): [127.0.0.1:17011]
2016/04/14 12:57:45 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:45 [INFO] nomad: adding server Node 17011.global (Addr: 127.0.0.1:17011) (DC: dc1)
2016/04/14 12:57:45 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:57:45 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:57:47 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:57:49 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:57:49 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:57:49 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:57:49 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:57:49 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:57:49 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:57:49 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:57:49 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:57:49 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:57:49 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:57:49 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:57:49 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:57:49 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:57:49 [INFO] client: setting server address list: []
2016/04/14 12:57:49 [DEBUG] client: node registration complete
2016/04/14 12:57:49 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:57:49 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:57:49 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:57:49 [DEBUG] client: state updated to ready
2016/04/14 12:57:49 [DEBUG] http: Shutting down http server
2016/04/14 12:57:49 [INFO] agent: requesting shutdown
2016/04/14 12:57:49 [INFO] client: shutting down
2016/04/14 12:57:49 [INFO] nomad: shutting down server
2016/04/14 12:57:49 [WARN] serf: Shutdown without a Leave
2016/04/14 12:57:49 [INFO] consul: shutting down consul service
2016/04/14 12:57:49 [INFO] agent: shutdown complete
--- PASS: TestHTTP_AgentForceLeave (4.06s)
=== RUN   TestHTTP_AgentSetServers
2016/04/14 12:57:49 [INFO] serf: EventMemberJoin: Node 17014.global 127.0.0.1
2016/04/14 12:57:49 [INFO] nomad: starting 4 scheduling worker(s) for [system service batch _core]
2016/04/14 12:57:49 [INFO] raft: Node at 127.0.0.1:17014 [Leader] entering Leader state
2016/04/14 12:57:49 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:49 [DEBUG] raft: Node 127.0.0.1:17014 updated peer set (2): [127.0.0.1:17014]
2016/04/14 12:57:49 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:49 [INFO] nomad: adding server Node 17014.global (Addr: 127.0.0.1:17014) (DC: dc1)
2016/04/14 12:57:49 [INFO] client: using state directory /tmp/NomadClient631936990
2016/04/14 12:57:49 [INFO] client: using alloc directory /tmp/NomadClient081655205
2016/04/14 12:57:49 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:57:49 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:57:51 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:57:53 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:57:53 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:57:53 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:57:53 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:57:53 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:57:53 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:57:53 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:57:53 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:57:53 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:57:53 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:57:53 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:57:53 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:57:53 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:57:53 [INFO] client: setting server address list: []
2016/04/14 12:57:53 [DEBUG] client: node registration complete
2016/04/14 12:57:53 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:57:53 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:57:53 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:57:53 [DEBUG] client: state updated to ready
2016/04/14 12:57:53 [WARN] client: port not specified, using default port
2016/04/14 12:57:53 [WARN] client: port not specified, using default port
2016/04/14 12:57:53 [INFO] client: setting server address list: [foo:4647 bar:4647]
2016/04/14 12:57:53 [DEBUG] http: Shutting down http server
2016/04/14 12:57:53 [INFO] agent: requesting shutdown
2016/04/14 12:57:53 [INFO] client: shutting down
2016/04/14 12:57:53 [INFO] nomad: shutting down server
2016/04/14 12:57:53 [WARN] serf: Shutdown without a Leave
2016/04/14 12:57:53 [INFO] consul: shutting down consul service
2016/04/14 12:57:53 [INFO] agent: shutdown complete
--- PASS: TestHTTP_AgentSetServers (4.10s)
=== RUN   TestAgent_RPCPing
2016/04/14 12:57:53 [INFO] serf: EventMemberJoin: Node 17017.global 127.0.0.1
2016/04/14 12:57:53 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:57:53 [INFO] client: using state directory /tmp/NomadClient018772767
2016/04/14 12:57:53 [INFO] client: using alloc directory /tmp/NomadClient240850418
2016/04/14 12:57:53 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:57:53 [INFO] raft: Node at 127.0.0.1:17017 [Leader] entering Leader state
2016/04/14 12:57:53 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:53 [DEBUG] raft: Node 127.0.0.1:17017 updated peer set (2): [127.0.0.1:17017]
2016/04/14 12:57:53 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:53 [INFO] nomad: adding server Node 17017.global (Addr: 127.0.0.1:17017) (DC: dc1)
2016/04/14 12:57:53 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:57:55 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:57:57 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:57:57 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:57:57 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:57:57 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:57:57 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:57:57 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:57:57 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:57:57 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:57:57 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:57:57 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:57:57 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:57:57 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:57:57 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:57:57 [INFO] client: setting server address list: []
2016/04/14 12:57:57 [INFO] agent: requesting shutdown
2016/04/14 12:57:57 [INFO] client: shutting down
2016/04/14 12:57:57 [INFO] nomad: shutting down server
2016/04/14 12:57:57 [WARN] serf: Shutdown without a Leave
2016/04/14 12:57:57 [INFO] consul: shutting down consul service
2016/04/14 12:57:57 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:57:57 [INFO] agent: shutdown complete
--- PASS: TestAgent_RPCPing (4.06s)
=== RUN   TestAgent_ServerConfig
--- PASS: TestAgent_ServerConfig (0.01s)
=== RUN   TestAgent_ClientConfig
--- PASS: TestAgent_ClientConfig (0.00s)
=== RUN   TestHTTP_AllocsList
2016/04/14 12:57:57 [INFO] serf: EventMemberJoin: Node 17020.global 127.0.0.1
2016/04/14 12:57:57 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:57:57 [INFO] client: using state directory /tmp/NomadClient540575732
2016/04/14 12:57:57 [INFO] client: using alloc directory /tmp/NomadClient430265539
2016/04/14 12:57:57 [INFO] raft: Node at 127.0.0.1:17020 [Leader] entering Leader state
2016/04/14 12:57:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:57 [DEBUG] raft: Node 127.0.0.1:17020 updated peer set (2): [127.0.0.1:17020]
2016/04/14 12:57:57 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:57 [INFO] nomad: adding server Node 17020.global (Addr: 127.0.0.1:17020) (DC: dc1)
2016/04/14 12:57:57 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:57:57 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:57:59 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:01 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:01 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:01 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:01 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:01 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:01 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:01 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:01 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:01 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:01 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:01 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:01 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:01 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:01 [INFO] client: setting server address list: []
2016/04/14 12:58:01 [DEBUG] client: node registration complete
2016/04/14 12:58:01 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:58:01 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:58:01 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:01 [DEBUG] client: state updated to ready
2016/04/14 12:58:01 [DEBUG] http: Shutting down http server
2016/04/14 12:58:01 [INFO] agent: requesting shutdown
2016/04/14 12:58:01 [INFO] client: shutting down
2016/04/14 12:58:01 [INFO] nomad: shutting down server
2016/04/14 12:58:01 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:01 [INFO] consul: shutting down consul service
2016/04/14 12:58:01 [INFO] agent: shutdown complete
--- PASS: TestHTTP_AllocsList (4.06s)
=== RUN   TestHTTP_AllocsPrefixList
2016/04/14 12:58:01 [INFO] serf: EventMemberJoin: Node 17023.global 127.0.0.1
2016/04/14 12:58:01 [INFO] nomad: starting 4 scheduling worker(s) for [system service batch _core]
2016/04/14 12:58:01 [INFO] client: using state directory /tmp/NomadClient945659373
2016/04/14 12:58:01 [INFO] client: using alloc directory /tmp/NomadClient901535336
2016/04/14 12:58:01 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:17023 [Leader] entering Leader state
2016/04/14 12:58:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:01 [DEBUG] raft: Node 127.0.0.1:17023 updated peer set (2): [127.0.0.1:17023]
2016/04/14 12:58:01 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:01 [INFO] nomad: adding server Node 17023.global (Addr: 127.0.0.1:17023) (DC: dc1)
2016/04/14 12:58:01 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:03 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:05 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:05 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:05 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:05 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:05 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:05 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:05 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:05 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:05 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:05 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:05 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:05 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:05 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:05 [INFO] client: setting server address list: []
2016/04/14 12:58:05 [DEBUG] client: node registration complete
2016/04/14 12:58:05 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:58:05 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:58:05 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:05 [DEBUG] client: state updated to ready
2016/04/14 12:58:05 [DEBUG] http: Shutting down http server
2016/04/14 12:58:05 [INFO] agent: requesting shutdown
2016/04/14 12:58:05 [INFO] client: shutting down
2016/04/14 12:58:05 [INFO] nomad: shutting down server
2016/04/14 12:58:05 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:05 [INFO] consul: shutting down consul service
2016/04/14 12:58:05 [INFO] agent: shutdown complete
--- PASS: TestHTTP_AllocsPrefixList (4.07s)
=== RUN   TestHTTP_AllocQuery
2016/04/14 12:58:05 [INFO] serf: EventMemberJoin: Node 17026.global 127.0.0.1
2016/04/14 12:58:05 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:17026 [Leader] entering Leader state
2016/04/14 12:58:05 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:17026 updated peer set (2): [127.0.0.1:17026]
2016/04/14 12:58:05 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:05 [INFO] nomad: adding server Node 17026.global (Addr: 127.0.0.1:17026) (DC: dc1)
2016/04/14 12:58:05 [INFO] client: using state directory /tmp/NomadClient033638362
2016/04/14 12:58:05 [INFO] client: using alloc directory /tmp/NomadClient714741617
2016/04/14 12:58:05 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:05 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:07 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:09 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:09 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:09 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:09 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:09 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:09 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:09 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:09 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:09 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:09 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:09 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:09 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:09 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:09 [INFO] client: setting server address list: []
2016/04/14 12:58:09 [DEBUG] client: node registration complete
2016/04/14 12:58:09 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:58:09 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:58:09 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:09 [DEBUG] client: state updated to ready
2016/04/14 12:58:09 [DEBUG] http: Shutting down http server
2016/04/14 12:58:09 [INFO] agent: requesting shutdown
2016/04/14 12:58:09 [INFO] client: shutting down
2016/04/14 12:58:09 [INFO] nomad: shutting down server
2016/04/14 12:58:09 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:09 [INFO] consul: shutting down consul service
2016/04/14 12:58:09 [INFO] agent: shutdown complete
--- PASS: TestHTTP_AllocQuery (4.11s)
=== RUN   TestCommand_Implements
--- PASS: TestCommand_Implements (0.00s)
=== RUN   TestCommand_Args
--- PASS: TestCommand_Args (0.72s)
=== RUN   TestRetryJoin
2016/04/14 12:58:10 [INFO] serf: EventMemberJoin: Node 17029.global 127.0.0.1
2016/04/14 12:58:10 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:58:10 [INFO] client: using state directory /tmp/NomadClient509421486
2016/04/14 12:58:10 [INFO] client: using alloc directory /tmp/NomadClient678574901
2016/04/14 12:58:10 [INFO] raft: Node at 127.0.0.1:17029 [Leader] entering Leader state
2016/04/14 12:58:10 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:10 [DEBUG] raft: Node 127.0.0.1:17029 updated peer set (2): [127.0.0.1:17029]
2016/04/14 12:58:10 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:10 [INFO] nomad: adding server Node 17029.global (Addr: 127.0.0.1:17029) (DC: dc1)
2016/04/14 12:58:10 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:10 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:12 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:14 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:14 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:14 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:14 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:14 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:14 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:14 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:14 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:14 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:14 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:14 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:14 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:14 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:14 [INFO] client: setting server address list: []
2016/04/14 12:58:14 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:58:14 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:58:14 [DEBUG] client: node registration complete
2016/04/14 12:58:14 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:14 [DEBUG] client: state updated to ready
2016/04/14 12:58:14 [DEBUG] memberlist: TCP connection from=127.0.0.1:35360
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: "Node 17031".global 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: adding server "Node 17031".global (Addr: 127.0.0.1:4647) (DC: dc1)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:17029 updated peer set (2): [127.0.0.1:4647 127.0.0.1:17029]
2016/04/14 12:58:14 [INFO] raft: Added peer 127.0.0.1:4647, starting replication
2016/04/14 12:58:14 [WARN] raft: AppendEntries to 127.0.0.1:4647 rejected, sending older logs (next: 1)
2016/04/14 12:58:14 [WARN] raft: Failed to contact 127.0.0.1:4647 in 20.301ms
2016/04/14 12:58:14 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:17029 [Follower] entering Follower state
2016/04/14 12:58:14 [INFO] nomad: cluster leadership lost
2016/04/14 12:58:14 [ERR] nomad: failed to add raft peer: leadership lost while committing log
2016/04/14 12:58:14 [ERR] nomad: failed to reconcile member: {"Node 17031".global 127.0.0.1 4648 map[build: port:4647 role:nomad region:global dc:dc1 vsn:1 vsn_min:1 vsn_max:1] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/04/14 12:58:15 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:15 [INFO] raft: Node at 127.0.0.1:17029 [Candidate] entering Candidate state
2016/04/14 12:58:15 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:15 [DEBUG] raft: Vote granted from 127.0.0.1:17029. Tally: 1
2016/04/14 12:58:15 [ERR] raft: Failed to AppendEntries to 127.0.0.1:4647: EOF
2016/04/14 12:58:15 [ERR] raft: Failed to heartbeat to 127.0.0.1:4647: EOF
2016/04/14 12:58:15 [WARN] raft: Election timeout reached, restarting election
2016/04/14 12:58:15 [INFO] raft: Node at 127.0.0.1:17029 [Candidate] entering Candidate state
2016/04/14 12:58:15 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:15 [DEBUG] raft: Vote granted from 127.0.0.1:17029. Tally: 1
2016/04/14 12:58:15 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:4647: read tcp 127.0.0.1:34534->127.0.0.1:4647: read: connection reset by peer
2016/04/14 12:58:15 [INFO] agent: requesting shutdown
2016/04/14 12:58:15 [INFO] client: shutting down
2016/04/14 12:58:15 [INFO] nomad: shutting down server
2016/04/14 12:58:15 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:15 [INFO] consul: shutting down consul service
2016/04/14 12:58:15 [DEBUG] memberlist: Failed UDP ping: "Node 17031".global (timeout reached)
2016/04/14 12:58:15 [INFO] memberlist: Suspect "Node 17031".global has failed, no acks received
2016/04/14 12:58:15 [INFO] memberlist: Marking "Node 17031".global as failed, suspect timeout reached
2016/04/14 12:58:15 [INFO] serf: EventMemberFailed: "Node 17031".global 127.0.0.1
2016/04/14 12:58:15 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:4647: read tcp 127.0.0.1:34535->127.0.0.1:4647: i/o timeout
2016/04/14 12:58:15 [INFO] agent: shutdown complete
--- PASS: TestRetryJoin (5.27s)
	command_test.go:114: bad: 1
=== RUN   TestConfig_Parse
--- FAIL: TestConfig_Parse (0.00s)
	config_parse_test.go:108: Testing parse: basic.hcl
	config_parse_test.go:118: file: basic.hcl
		
		open /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/src/github.com/hashicorp/nomad/command/agent/config-test-fixtures/basic.hcl: no such file or directory
=== RUN   TestConfig_Merge
--- PASS: TestConfig_Merge (0.00s)
=== RUN   TestConfig_ParseConfigFile
illegal
illegal
illegal
illegal
--- PASS: TestConfig_ParseConfigFile (0.00s)
=== RUN   TestConfig_LoadConfigDir
illegal
illegal
illegal
illegal
--- PASS: TestConfig_LoadConfigDir (0.01s)
=== RUN   TestConfig_LoadConfig
--- PASS: TestConfig_LoadConfig (0.00s)
=== RUN   TestConfig_LoadConfigsFileOrder
--- FAIL: TestConfig_LoadConfigsFileOrder (0.00s)
	config_test.go:333: Failed to load config: open test-resources/etcnomad: no such file or directory
=== RUN   TestConfig_Listener
--- PASS: TestConfig_Listener (0.02s)
=== RUN   TestResources_ParseReserved
--- PASS: TestResources_ParseReserved (0.00s)
=== RUN   TestHTTP_EvalList
2016/04/14 12:58:15 [INFO] serf: EventMemberJoin: Node 17033.global 127.0.0.1
2016/04/14 12:58:15 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:58:15 [INFO] client: using state directory /tmp/NomadClient742835990
2016/04/14 12:58:15 [INFO] client: using alloc directory /tmp/NomadClient379748733
2016/04/14 12:58:15 [INFO] raft: Node at 127.0.0.1:17033 [Leader] entering Leader state
2016/04/14 12:58:15 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:15 [DEBUG] raft: Node 127.0.0.1:17033 updated peer set (2): [127.0.0.1:17033]
2016/04/14 12:58:15 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:15 [INFO] nomad: adding server Node 17033.global (Addr: 127.0.0.1:17033) (DC: dc1)
2016/04/14 12:58:15 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:15 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:17 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:19 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:19 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:19 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:19 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:19 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:19 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:19 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:19 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:19 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:19 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:19 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:19 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:19 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:19 [INFO] client: setting server address list: []
2016/04/14 12:58:19 [DEBUG] client: node registration complete
2016/04/14 12:58:19 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:58:19 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:58:19 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:19 [DEBUG] client: state updated to ready
2016/04/14 12:58:19 [DEBUG] http: Shutting down http server
2016/04/14 12:58:19 [INFO] agent: requesting shutdown
2016/04/14 12:58:19 [INFO] client: shutting down
2016/04/14 12:58:19 [INFO] nomad: shutting down server
2016/04/14 12:58:19 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:19 [INFO] consul: shutting down consul service
2016/04/14 12:58:19 [INFO] agent: shutdown complete
--- PASS: TestHTTP_EvalList (4.09s)
=== RUN   TestHTTP_EvalPrefixList
2016/04/14 12:58:19 [INFO] serf: EventMemberJoin: Node 17036.global 127.0.0.1
2016/04/14 12:58:19 [INFO] nomad: starting 4 scheduling worker(s) for [batch system service _core]
2016/04/14 12:58:19 [INFO] client: using state directory /tmp/NomadClient164737207
2016/04/14 12:58:19 [INFO] client: using alloc directory /tmp/NomadClient695373994
2016/04/14 12:58:19 [INFO] raft: Node at 127.0.0.1:17036 [Leader] entering Leader state
2016/04/14 12:58:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:19 [DEBUG] raft: Node 127.0.0.1:17036 updated peer set (2): [127.0.0.1:17036]
2016/04/14 12:58:19 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:19 [INFO] nomad: adding server Node 17036.global (Addr: 127.0.0.1:17036) (DC: dc1)
2016/04/14 12:58:19 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:19 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:21 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:23 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:23 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:23 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:23 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:23 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:23 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:23 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:23 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:23 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:23 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:23 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:23 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:23 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:23 [INFO] client: setting server address list: []
2016/04/14 12:58:23 [DEBUG] client: node registration complete
2016/04/14 12:58:23 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:58:23 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:58:23 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:23 [DEBUG] client: state updated to ready
2016/04/14 12:58:23 [DEBUG] http: Shutting down http server
2016/04/14 12:58:23 [INFO] agent: requesting shutdown
2016/04/14 12:58:23 [INFO] client: shutting down
2016/04/14 12:58:23 [INFO] nomad: shutting down server
2016/04/14 12:58:23 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:23 [INFO] consul: shutting down consul service
2016/04/14 12:58:23 [INFO] agent: shutdown complete
--- PASS: TestHTTP_EvalPrefixList (4.06s)
=== RUN   TestHTTP_EvalAllocations
2016/04/14 12:58:23 [INFO] serf: EventMemberJoin: Node 17039.global 127.0.0.1
2016/04/14 12:58:23 [INFO] nomad: starting 4 scheduling worker(s) for [batch system service _core]
2016/04/14 12:58:23 [INFO] raft: Node at 127.0.0.1:17039 [Leader] entering Leader state
2016/04/14 12:58:23 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:23 [DEBUG] raft: Node 127.0.0.1:17039 updated peer set (2): [127.0.0.1:17039]
2016/04/14 12:58:23 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:23 [INFO] nomad: adding server Node 17039.global (Addr: 127.0.0.1:17039) (DC: dc1)
2016/04/14 12:58:23 [INFO] client: using state directory /tmp/NomadClient840285548
2016/04/14 12:58:23 [INFO] client: using alloc directory /tmp/NomadClient949444571
2016/04/14 12:58:23 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:23 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:25 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:27 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:27 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:27 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:27 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:27 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:27 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:27 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:27 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:27 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:27 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:27 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:27 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:27 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:27 [INFO] client: setting server address list: []
2016/04/14 12:58:27 [DEBUG] client: node registration complete
2016/04/14 12:58:27 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:58:27 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:58:27 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:27 [DEBUG] client: state updated to ready
2016/04/14 12:58:27 [DEBUG] http: Shutting down http server
2016/04/14 12:58:27 [INFO] agent: requesting shutdown
2016/04/14 12:58:27 [INFO] client: shutting down
2016/04/14 12:58:27 [INFO] nomad: shutting down server
2016/04/14 12:58:27 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:27 [INFO] consul: shutting down consul service
2016/04/14 12:58:27 [INFO] agent: shutdown complete
--- PASS: TestHTTP_EvalAllocations (4.06s)
=== RUN   TestHTTP_EvalQuery
2016/04/14 12:58:27 [INFO] serf: EventMemberJoin: Node 17042.global 127.0.0.1
2016/04/14 12:58:27 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:58:27 [INFO] client: using state directory /tmp/NomadClient929272005
2016/04/14 12:58:27 [INFO] client: using alloc directory /tmp/NomadClient055112032
2016/04/14 12:58:27 [INFO] raft: Node at 127.0.0.1:17042 [Leader] entering Leader state
2016/04/14 12:58:27 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:27 [DEBUG] raft: Node 127.0.0.1:17042 updated peer set (2): [127.0.0.1:17042]
2016/04/14 12:58:27 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:27 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:27 [INFO] nomad: adding server Node 17042.global (Addr: 127.0.0.1:17042) (DC: dc1)
2016/04/14 12:58:27 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:29 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:31 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:31 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:31 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:31 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:31 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:31 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:31 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:31 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:31 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:31 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:31 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:31 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:31 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:31 [INFO] client: setting server address list: []
2016/04/14 12:58:31 [DEBUG] client: node registration complete
2016/04/14 12:58:31 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:58:31 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:58:31 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:31 [DEBUG] client: state updated to ready
2016/04/14 12:58:31 [DEBUG] http: Shutting down http server
2016/04/14 12:58:31 [INFO] agent: requesting shutdown
2016/04/14 12:58:31 [INFO] client: shutting down
2016/04/14 12:58:31 [INFO] nomad: shutting down server
2016/04/14 12:58:31 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:31 [INFO] consul: shutting down consul service
2016/04/14 12:58:31 [INFO] agent: shutdown complete
--- PASS: TestHTTP_EvalQuery (4.11s)
=== RUN   TestAllocDirFS_List_MissingParams
2016/04/14 12:58:31 [INFO] raft: Node at 127.0.0.1:17045 [Leader] entering Leader state
2016/04/14 12:58:31 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:31 [DEBUG] raft: Node 127.0.0.1:17045 updated peer set (2): [127.0.0.1:17045]
2016/04/14 12:58:32 [INFO] serf: EventMemberJoin: Node 17045.global 127.0.0.1
2016/04/14 12:58:32 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:58:32 [INFO] client: using state directory /tmp/NomadClient475429778
2016/04/14 12:58:32 [INFO] client: using alloc directory /tmp/NomadClient235006409
2016/04/14 12:58:32 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:32 [INFO] nomad: adding server Node 17045.global (Addr: 127.0.0.1:17045) (DC: dc1)
2016/04/14 12:58:32 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:32 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:34 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:36 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:36 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:36 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:36 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:36 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:36 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:36 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:36 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:36 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:36 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:36 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:36 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:36 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:36 [INFO] client: setting server address list: []
2016/04/14 12:58:36 [DEBUG] client: node registration complete
2016/04/14 12:58:36 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:58:36 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:58:36 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:36 [DEBUG] client: state updated to ready
2016/04/14 12:58:36 [DEBUG] http: Shutting down http server
2016/04/14 12:58:36 [INFO] agent: requesting shutdown
2016/04/14 12:58:36 [INFO] client: shutting down
2016/04/14 12:58:36 [INFO] nomad: shutting down server
2016/04/14 12:58:36 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:36 [INFO] consul: shutting down consul service
2016/04/14 12:58:36 [INFO] agent: shutdown complete
--- PASS: TestAllocDirFS_List_MissingParams (4.08s)
=== RUN   TestAllocDirFS_Stat_MissingParams
2016/04/14 12:58:36 [INFO] serf: EventMemberJoin: Node 17048.global 127.0.0.1
2016/04/14 12:58:36 [INFO] nomad: starting 4 scheduling worker(s) for [system service batch _core]
2016/04/14 12:58:36 [INFO] raft: Node at 127.0.0.1:17048 [Leader] entering Leader state
2016/04/14 12:58:36 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:36 [DEBUG] raft: Node 127.0.0.1:17048 updated peer set (2): [127.0.0.1:17048]
2016/04/14 12:58:36 [INFO] client: using state directory /tmp/NomadClient771027171
2016/04/14 12:58:36 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:36 [INFO] nomad: adding server Node 17048.global (Addr: 127.0.0.1:17048) (DC: dc1)
2016/04/14 12:58:36 [INFO] client: using alloc directory /tmp/NomadClient492626662
2016/04/14 12:58:36 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:36 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:38 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:40 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:40 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:40 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:40 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:40 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:40 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:40 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:40 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:40 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:40 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:40 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:40 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:40 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:40 [INFO] client: setting server address list: []
2016/04/14 12:58:40 [DEBUG] client: node registration complete
2016/04/14 12:58:40 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:58:40 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:58:40 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:40 [DEBUG] client: state updated to ready
2016/04/14 12:58:40 [DEBUG] http: Shutting down http server
2016/04/14 12:58:40 [INFO] agent: requesting shutdown
2016/04/14 12:58:40 [INFO] client: shutting down
2016/04/14 12:58:40 [INFO] nomad: shutting down server
2016/04/14 12:58:40 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:40 [INFO] consul: shutting down consul service
2016/04/14 12:58:40 [INFO] agent: shutdown complete
--- PASS: TestAllocDirFS_Stat_MissingParams (4.06s)
=== RUN   TestAllocDirFS_ReadAt_MissingParams
2016/04/14 12:58:40 [INFO] serf: EventMemberJoin: Node 17051.global 127.0.0.1
2016/04/14 12:58:40 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:58:40 [INFO] client: using state directory /tmp/NomadClient467783944
2016/04/14 12:58:40 [INFO] client: using alloc directory /tmp/NomadClient001732295
2016/04/14 12:58:40 [INFO] raft: Node at 127.0.0.1:17051 [Leader] entering Leader state
2016/04/14 12:58:40 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:40 [DEBUG] raft: Node 127.0.0.1:17051 updated peer set (2): [127.0.0.1:17051]
2016/04/14 12:58:40 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:40 [INFO] nomad: adding server Node 17051.global (Addr: 127.0.0.1:17051) (DC: dc1)
2016/04/14 12:58:40 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:40 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:42 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:44 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:44 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:44 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:44 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:44 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:44 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:44 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:44 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:44 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:44 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:44 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:44 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:44 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:44 [INFO] client: setting server address list: []
2016/04/14 12:58:44 [DEBUG] client: node registration complete
2016/04/14 12:58:44 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:58:44 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:58:44 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:44 [DEBUG] client: state updated to ready
2016/04/14 12:58:44 [DEBUG] http: Shutting down http server
2016/04/14 12:58:44 [INFO] agent: requesting shutdown
2016/04/14 12:58:44 [INFO] client: shutting down
2016/04/14 12:58:44 [INFO] nomad: shutting down server
2016/04/14 12:58:44 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:44 [INFO] consul: shutting down consul service
2016/04/14 12:58:44 [INFO] agent: shutdown complete
--- PASS: TestAllocDirFS_ReadAt_MissingParams (4.05s)
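A note on the three TestAllocDirFS_*_MissingParams cases above (List, Stat, ReadAt): they check that the client filesystem endpoints reject requests which omit required query parameters such as the allocation ID or the path. The handler below is only a stand-in written against the Go standard library, not code from the package being built; the parameter names "allocation" and "path" are assumptions made for the sake of the example.

    package main

    import (
        "fmt"
        "net/http"
        "net/http/httptest"
    )

    // fsListHandler is a stand-in for an endpoint that needs both an
    // allocation ID and a path; a missing parameter yields a 400.
    func fsListHandler(w http.ResponseWriter, r *http.Request) {
        allocID := r.URL.Query().Get("allocation")
        path := r.URL.Query().Get("path")
        if allocID == "" || path == "" {
            http.Error(w, "allocation id and path must be provided", http.StatusBadRequest)
            return
        }
        fmt.Fprintf(w, "listing %s of allocation %s\n", path, allocID)
    }

    func main() {
        srv := httptest.NewServer(http.HandlerFunc(fsListHandler))
        defer srv.Close()

        // One request with no parameters, one with both.
        for _, q := range []string{"", "?allocation=abc123&path=/"} {
            resp, err := http.Get(srv.URL + q)
            if err != nil {
                panic(err)
            }
            fmt.Println(q, "->", resp.Status) // 400 Bad Request, then 200 OK
            resp.Body.Close()
        }
    }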
=== RUN   TestSetIndex
--- PASS: TestSetIndex (0.00s)
=== RUN   TestSetKnownLeader
--- PASS: TestSetKnownLeader (0.00s)
=== RUN   TestSetLastContact
--- PASS: TestSetLastContact (0.00s)
=== RUN   TestSetMeta
--- PASS: TestSetMeta (0.00s)
=== RUN   TestSetHeaders
2016/04/14 12:58:44 [INFO] serf: EventMemberJoin: Node 17054.global 127.0.0.1
2016/04/14 12:58:44 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:58:44 [INFO] raft: Node at 127.0.0.1:17054 [Leader] entering Leader state
2016/04/14 12:58:44 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:44 [DEBUG] raft: Node 127.0.0.1:17054 updated peer set (2): [127.0.0.1:17054]
2016/04/14 12:58:44 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:44 [INFO] nomad: adding server Node 17054.global (Addr: 127.0.0.1:17054) (DC: dc1)
2016/04/14 12:58:44 [INFO] client: using state directory /tmp/NomadClient219068561
2016/04/14 12:58:44 [INFO] client: using alloc directory /tmp/NomadClient926505660
2016/04/14 12:58:44 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:44 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:46 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:48 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:48 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:48 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:48 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:48 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:48 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:48 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:48 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:48 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:48 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:48 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:48 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:48 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:48 [INFO] client: setting server address list: []
2016/04/14 12:58:48 [DEBUG] http: Request /v1/kv/key (2.246ms)
2016/04/14 12:58:48 [DEBUG] http: Shutting down http server
2016/04/14 12:58:48 [INFO] agent: requesting shutdown
2016/04/14 12:58:48 [INFO] client: shutting down
2016/04/14 12:58:48 [INFO] nomad: shutting down server
2016/04/14 12:58:48 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:48 [INFO] consul: shutting down consul service
2016/04/14 12:58:48 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:48 [INFO] agent: shutdown complete
--- PASS: TestSetHeaders (4.12s)
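The TestSetIndex, TestSetKnownLeader, TestSetLastContact, TestSetMeta and TestSetHeaders cases above exercise the small helpers that stamp query metadata (raft index, leader status, last contact) onto HTTP responses. As a rough illustration only, not code taken from the Nomad sources, such a helper can be sketched as follows; the X-Nomad-* header names follow the ones the Nomad HTTP API documents, but treat the exact names and the millisecond unit here as assumptions.

    package main

    import (
        "fmt"
        "net/http"
        "net/http/httptest"
        "strconv"
        "time"
    )

    // setMeta stamps blocking-query metadata onto a response, in the
    // spirit of the TestSet* helpers exercised above.
    func setMeta(w http.ResponseWriter, index uint64, knownLeader bool, lastContact time.Duration) {
        w.Header().Set("X-Nomad-Index", strconv.FormatUint(index, 10))
        w.Header().Set("X-Nomad-KnownLeader", strconv.FormatBool(knownLeader))
        w.Header().Set("X-Nomad-LastContact", strconv.FormatInt(int64(lastContact/time.Millisecond), 10))
    }

    func main() {
        rec := httptest.NewRecorder()
        setMeta(rec, 1000, true, 123*time.Millisecond)
        fmt.Println(rec.Header().Get("X-Nomad-Index"))       // 1000
        fmt.Println(rec.Header().Get("X-Nomad-KnownLeader")) // true
        fmt.Println(rec.Header().Get("X-Nomad-LastContact")) // 123
    }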
=== RUN   TestContentTypeIsJSON
2016/04/14 12:58:48 [INFO] serf: EventMemberJoin: Node 17057.global 127.0.0.1
2016/04/14 12:58:48 [INFO] nomad: starting 4 scheduling worker(s) for [batch system service _core]
2016/04/14 12:58:48 [INFO] client: using state directory /tmp/NomadClient367566670
2016/04/14 12:58:48 [INFO] client: using alloc directory /tmp/NomadClient539003477
2016/04/14 12:58:48 [INFO] raft: Node at 127.0.0.1:17057 [Leader] entering Leader state
2016/04/14 12:58:48 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:48 [DEBUG] raft: Node 127.0.0.1:17057 updated peer set (2): [127.0.0.1:17057]
2016/04/14 12:58:48 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:48 [INFO] nomad: adding server Node 17057.global (Addr: 127.0.0.1:17057) (DC: dc1)
2016/04/14 12:58:48 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:48 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:50 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:52 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:52 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:52 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:52 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:52 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:52 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:52 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:52 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:52 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:52 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:52 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:52 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:52 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:52 [INFO] client: setting server address list: []
2016/04/14 12:58:52 [DEBUG] http: Request /v1/kv/key (151.667µs)
2016/04/14 12:58:52 [DEBUG] http: Shutting down http server
2016/04/14 12:58:52 [INFO] agent: requesting shutdown
2016/04/14 12:58:52 [INFO] client: shutting down
2016/04/14 12:58:52 [INFO] nomad: shutting down server
2016/04/14 12:58:52 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:52 [INFO] consul: shutting down consul service
2016/04/14 12:58:52 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:52 [INFO] agent: shutdown complete
--- PASS: TestContentTypeIsJSON (4.05s)
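TestContentTypeIsJSON above asserts that API responses are served with a JSON content type (the /v1/kv/key request in the log is just the probe URL the test uses). A self-contained check of the same property against a stand-in handler, using only net/http and httptest, looks like this:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
        "net/http/httptest"
    )

    func main() {
        // Stand-in handler that answers like a JSON API endpoint.
        h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(map[string]string{"Key": "key"})
        })

        srv := httptest.NewServer(h)
        defer srv.Close()

        resp, err := http.Get(srv.URL + "/v1/kv/key")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Header.Get("Content-Type")) // application/json
    }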
=== RUN   TestPrettyPrint
2016/04/14 12:58:52 [INFO] serf: EventMemberJoin: Node 17060.global 127.0.0.1
2016/04/14 12:58:52 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:58:52 [INFO] raft: Node at 127.0.0.1:17060 [Leader] entering Leader state
2016/04/14 12:58:52 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:52 [DEBUG] raft: Node 127.0.0.1:17060 updated peer set (2): [127.0.0.1:17060]
2016/04/14 12:58:52 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:52 [INFO] nomad: adding server Node 17060.global (Addr: 127.0.0.1:17060) (DC: dc1)
2016/04/14 12:58:52 [INFO] client: using state directory /tmp/NomadClient739111247
2016/04/14 12:58:52 [INFO] client: using alloc directory /tmp/NomadClient089714786
2016/04/14 12:58:52 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:52 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:54 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:58:56 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:58:56 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:58:56 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:58:56 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:58:56 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:58:56 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:58:56 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:58:56 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:58:56 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:58:56 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:58:56 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:58:56 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:58:56 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:58:56 [INFO] client: setting server address list: []
2016/04/14 12:58:56 [DEBUG] http: Request /v1/job/foo?pretty=1 (258µs)
2016/04/14 12:58:56 [DEBUG] http: Shutting down http server
2016/04/14 12:58:56 [INFO] agent: requesting shutdown
2016/04/14 12:58:56 [INFO] client: shutting down
2016/04/14 12:58:56 [INFO] nomad: shutting down server
2016/04/14 12:58:56 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:56 [INFO] consul: shutting down consul service
2016/04/14 12:58:56 [INFO] nomad: cluster leadership lost
2016/04/14 12:58:56 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:58:56 [INFO] agent: shutdown complete
--- PASS: TestPrettyPrint (4.04s)
=== RUN   TestPrettyPrintBare
2016/04/14 12:58:56 [INFO] serf: EventMemberJoin: Node 17063.global 127.0.0.1
2016/04/14 12:58:56 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:58:56 [INFO] raft: Node at 127.0.0.1:17063 [Leader] entering Leader state
2016/04/14 12:58:56 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:56 [DEBUG] raft: Node 127.0.0.1:17063 updated peer set (2): [127.0.0.1:17063]
2016/04/14 12:58:56 [INFO] client: using state directory /tmp/NomadClient685380068
2016/04/14 12:58:56 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:56 [INFO] nomad: adding server Node 17063.global (Addr: 127.0.0.1:17063) (DC: dc1)
2016/04/14 12:58:56 [INFO] client: using alloc directory /tmp/NomadClient058428403
2016/04/14 12:58:56 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:58:56 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:58:58 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:00 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:00 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:00 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:00 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:00 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:00 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:00 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:00 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:00 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:00 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:00 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:00 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:00 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:00 [INFO] client: setting server address list: []
2016/04/14 12:59:00 [DEBUG] http: Request /v1/job/foo?pretty (235.333µs)
2016/04/14 12:59:00 [DEBUG] http: Shutting down http server
2016/04/14 12:59:00 [INFO] agent: requesting shutdown
2016/04/14 12:59:00 [INFO] client: shutting down
2016/04/14 12:59:00 [INFO] nomad: shutting down server
2016/04/14 12:59:00 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:00 [INFO] consul: shutting down consul service
2016/04/14 12:59:00 [INFO] nomad: cluster leadership lost
2016/04/14 12:59:00 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:00 [INFO] agent: shutdown complete
--- PASS: TestPrettyPrintBare (4.04s)
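TestPrettyPrint and TestPrettyPrintBare above request /v1/job/foo once with ?pretty=1 and once with a bare ?pretty, expecting indented JSON in both cases. The query-string handling being tested can be sketched as below; this is an assumed implementation for illustration, not the one in the Nomad sources.

    package main

    import (
        "encoding/json"
        "fmt"
        "net/url"
    )

    // wantPretty reports whether the request asked for indented output:
    // both "?pretty=1" and a bare "?pretty" should count.
    func wantPretty(q url.Values) bool {
        if _, ok := q["pretty"]; !ok {
            return false
        }
        return q.Get("pretty") != "0" && q.Get("pretty") != "false"
    }

    func main() {
        for _, raw := range []string{"pretty=1", "pretty", ""} {
            q, _ := url.ParseQuery(raw)
            out := map[string]string{"ID": "foo"}
            var buf []byte
            if wantPretty(q) {
                buf, _ = json.MarshalIndent(out, "", "  ")
            } else {
                buf, _ = json.Marshal(out)
            }
            fmt.Printf("%q -> %s\n", raw, buf)
        }
    }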
=== RUN   TestParseWait
--- PASS: TestParseWait (0.00s)
=== RUN   TestParseWait_InvalidTime
--- PASS: TestParseWait_InvalidTime (0.00s)
=== RUN   TestParseWait_InvalidIndex
--- PASS: TestParseWait_InvalidIndex (0.00s)
=== RUN   TestParseConsistency
--- PASS: TestParseConsistency (0.00s)
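TestParseWait, its two _Invalid variants and TestParseConsistency above cover parsing of the blocking-query parameters (?index=, ?wait=) and the ?stale consistency flag. A rough standard-library sketch of that parsing, with no claim to match Nomad's exact rules or field names, is:

    package main

    import (
        "fmt"
        "net/url"
        "strconv"
        "time"
    )

    type queryOptions struct {
        MinIndex   uint64
        MaxWait    time.Duration
        AllowStale bool
    }

    // parseQuery pulls blocking-query options out of a URL query string.
    // Invalid values are reported as errors, which is the behaviour the
    // TestParseWait_InvalidTime / _InvalidIndex cases check for.
    func parseQuery(q url.Values) (queryOptions, error) {
        var opts queryOptions
        if raw := q.Get("index"); raw != "" {
            idx, err := strconv.ParseUint(raw, 10, 64)
            if err != nil {
                return opts, fmt.Errorf("invalid index %q: %v", raw, err)
            }
            opts.MinIndex = idx
        }
        if raw := q.Get("wait"); raw != "" {
            d, err := time.ParseDuration(raw)
            if err != nil {
                return opts, fmt.Errorf("invalid wait %q: %v", raw, err)
            }
            opts.MaxWait = d
        }
        _, opts.AllowStale = q["stale"]
        return opts, nil
    }

    func main() {
        q, _ := url.ParseQuery("index=1000&wait=60s&stale")
        opts, err := parseQuery(q)
        fmt.Println(opts.MinIndex, opts.MaxWait, opts.AllowStale, err) // 1000 1m0s true <nil>

        q, _ = url.ParseQuery("wait=banana")
        _, err = parseQuery(q)
        fmt.Println(err) // invalid wait "banana": ...
    }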
=== RUN   TestParseRegion
2016/04/14 12:59:00 [INFO] serf: EventMemberJoin: Node 17066.global 127.0.0.1
2016/04/14 12:59:00 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:00 [INFO] raft: Node at 127.0.0.1:17066 [Leader] entering Leader state
2016/04/14 12:59:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:00 [DEBUG] raft: Node 127.0.0.1:17066 updated peer set (2): [127.0.0.1:17066]
2016/04/14 12:59:00 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:00 [INFO] nomad: adding server Node 17066.global (Addr: 127.0.0.1:17066) (DC: dc1)
2016/04/14 12:59:00 [INFO] client: using state directory /tmp/NomadClient326125725
2016/04/14 12:59:00 [INFO] client: using alloc directory /tmp/NomadClient224801112
2016/04/14 12:59:00 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:00 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:02 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:04 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:04 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:04 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:04 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:04 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:04 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:04 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:04 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:04 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:04 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:04 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:04 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:04 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:04 [INFO] client: setting server address list: []
2016/04/14 12:59:04 [DEBUG] http: Shutting down http server
2016/04/14 12:59:04 [INFO] agent: requesting shutdown
2016/04/14 12:59:04 [INFO] client: shutting down
2016/04/14 12:59:04 [INFO] nomad: shutting down server
2016/04/14 12:59:04 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:04 [INFO] consul: shutting down consul service
2016/04/14 12:59:04 [INFO] nomad: cluster leadership lost
2016/04/14 12:59:04 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:04 [INFO] agent: shutdown complete
--- PASS: TestParseRegion (4.04s)
=== RUN   TestHTTP_JobsList
2016/04/14 12:59:04 [INFO] serf: EventMemberJoin: Node 17069.global 127.0.0.1
2016/04/14 12:59:04 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:04 [INFO] raft: Node at 127.0.0.1:17069 [Leader] entering Leader state
2016/04/14 12:59:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:04 [DEBUG] raft: Node 127.0.0.1:17069 updated peer set (2): [127.0.0.1:17069]
2016/04/14 12:59:04 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:04 [INFO] nomad: adding server Node 17069.global (Addr: 127.0.0.1:17069) (DC: dc1)
2016/04/14 12:59:04 [INFO] client: using state directory /tmp/NomadClient361547850
2016/04/14 12:59:04 [INFO] client: using alloc directory /tmp/NomadClient595378465
2016/04/14 12:59:04 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:04 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:06 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:08 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:08 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:08 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:08 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:08 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:08 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:08 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:08 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:08 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:08 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:08 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:08 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:08 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:08 [INFO] client: setting server address list: []
2016/04/14 12:59:08 [DEBUG] client: node registration complete
2016/04/14 12:59:08 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:08 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:08 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:08 [DEBUG] client: state updated to ready
2016/04/14 12:59:08 [DEBUG] worker: dequeued evaluation 6c61559b-1792-fce1-903d-92af899a4f18
2016/04/14 12:59:08 [DEBUG] sched: <Eval '6c61559b-1792-fce1-903d-92af899a4f18' JobID: '1c08cfd5-446f-7fd8-d0c3-9c85b1a2cdbe'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:59:08 [DEBUG] http: Shutting down http server
2016/04/14 12:59:08 [INFO] agent: requesting shutdown
2016/04/14 12:59:08 [INFO] client: shutting down
2016/04/14 12:59:08 [INFO] nomad: shutting down server
2016/04/14 12:59:08 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:08 [DEBUG] worker: created evaluation <Eval 'e663c4e5-8192-1307-4d2c-58d24d49ad5d' JobID: '1c08cfd5-446f-7fd8-d0c3-9c85b1a2cdbe'>
2016/04/14 12:59:08 [DEBUG] sched: <Eval '6c61559b-1792-fce1-903d-92af899a4f18' JobID: '1c08cfd5-446f-7fd8-d0c3-9c85b1a2cdbe'>: failed to place all allocations, blocked eval 'e663c4e5-8192-1307-4d2c-58d24d49ad5d' created
2016/04/14 12:59:08 [ERR] worker: failed to nack evaluation '6c61559b-1792-fce1-903d-92af899a4f18': No cluster leader
2016/04/14 12:59:08 [INFO] consul: shutting down consul service
2016/04/14 12:59:08 [INFO] agent: shutdown complete
--- PASS: TestHTTP_JobsList (4.16s)
=== RUN   TestHTTP_PrefixJobsList
2016/04/14 12:59:08 [INFO] serf: EventMemberJoin: Node 17072.global 127.0.0.1
2016/04/14 12:59:08 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:08 [INFO] client: using state directory /tmp/NomadClient033998843
2016/04/14 12:59:08 [INFO] client: using alloc directory /tmp/NomadClient283457310
2016/04/14 12:59:08 [INFO] raft: Node at 127.0.0.1:17072 [Leader] entering Leader state
2016/04/14 12:59:08 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:08 [DEBUG] raft: Node 127.0.0.1:17072 updated peer set (2): [127.0.0.1:17072]
2016/04/14 12:59:08 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:08 [INFO] nomad: adding server Node 17072.global (Addr: 127.0.0.1:17072) (DC: dc1)
2016/04/14 12:59:08 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:08 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:10 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:12 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:12 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:12 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:12 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:12 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:12 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:12 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:12 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:12 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:12 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:12 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:12 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:12 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:12 [INFO] client: setting server address list: []
2016/04/14 12:59:12 [DEBUG] client: node registration complete
2016/04/14 12:59:12 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:12 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:12 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:12 [DEBUG] client: state updated to ready
2016/04/14 12:59:12 [DEBUG] worker: dequeued evaluation edc05221-5cd9-92d3-6a5a-7e906b513924
2016/04/14 12:59:12 [DEBUG] sched: <Eval 'edc05221-5cd9-92d3-6a5a-7e906b513924' JobID: 'aaaaaaaa-e8f7-fd38-c855-ab94ceb89706'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:59:12 [DEBUG] worker: created evaluation <Eval 'a4c8334a-5635-2015-09cf-79f5e93af614' JobID: 'aaaaaaaa-e8f7-fd38-c855-ab94ceb89706'>
2016/04/14 12:59:12 [DEBUG] sched: <Eval 'edc05221-5cd9-92d3-6a5a-7e906b513924' JobID: 'aaaaaaaa-e8f7-fd38-c855-ab94ceb89706'>: failed to place all allocations, blocked eval 'a4c8334a-5635-2015-09cf-79f5e93af614' created
2016/04/14 12:59:12 [DEBUG] worker: submitted plan for evaluation edc05221-5cd9-92d3-6a5a-7e906b513924
2016/04/14 12:59:12 [DEBUG] sched: <Eval 'edc05221-5cd9-92d3-6a5a-7e906b513924' JobID: 'aaaaaaaa-e8f7-fd38-c855-ab94ceb89706'>: setting status to complete
2016/04/14 12:59:12 [DEBUG] worker: updated evaluation <Eval 'edc05221-5cd9-92d3-6a5a-7e906b513924' JobID: 'aaaaaaaa-e8f7-fd38-c855-ab94ceb89706'>
2016/04/14 12:59:12 [DEBUG] worker: ack for evaluation edc05221-5cd9-92d3-6a5a-7e906b513924
2016/04/14 12:59:12 [DEBUG] worker: dequeued evaluation ad5c0c2e-e63e-7cd5-ecf2-0d950b7acea7
2016/04/14 12:59:12 [DEBUG] sched: <Eval 'ad5c0c2e-e63e-7cd5-ecf2-0d950b7acea7' JobID: 'aabbbbbb-e8f7-fd38-c855-ab94ceb89706'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:59:12 [DEBUG] worker: created evaluation <Eval 'eff56b23-afd6-85fd-e680-b29e55cfa20a' JobID: 'aabbbbbb-e8f7-fd38-c855-ab94ceb89706'>
2016/04/14 12:59:12 [DEBUG] sched: <Eval 'ad5c0c2e-e63e-7cd5-ecf2-0d950b7acea7' JobID: 'aabbbbbb-e8f7-fd38-c855-ab94ceb89706'>: failed to place all allocations, blocked eval 'eff56b23-afd6-85fd-e680-b29e55cfa20a' created
2016/04/14 12:59:12 [DEBUG] worker: submitted plan for evaluation ad5c0c2e-e63e-7cd5-ecf2-0d950b7acea7
2016/04/14 12:59:12 [DEBUG] sched: <Eval 'ad5c0c2e-e63e-7cd5-ecf2-0d950b7acea7' JobID: 'aabbbbbb-e8f7-fd38-c855-ab94ceb89706'>: setting status to complete
2016/04/14 12:59:12 [DEBUG] worker: updated evaluation <Eval 'ad5c0c2e-e63e-7cd5-ecf2-0d950b7acea7' JobID: 'aabbbbbb-e8f7-fd38-c855-ab94ceb89706'>
2016/04/14 12:59:12 [DEBUG] worker: ack for evaluation ad5c0c2e-e63e-7cd5-ecf2-0d950b7acea7
2016/04/14 12:59:12 [DEBUG] http: Shutting down http server
2016/04/14 12:59:12 [INFO] agent: requesting shutdown
2016/04/14 12:59:12 [INFO] client: shutting down
2016/04/14 12:59:12 [INFO] nomad: shutting down server
2016/04/14 12:59:12 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:12 [INFO] consul: shutting down consul service
2016/04/14 12:59:12 [INFO] agent: shutdown complete
--- PASS: TestHTTP_PrefixJobsList (4.09s)
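TestHTTP_JobsList and TestHTTP_PrefixJobsList above register a handful of jobs and list them back over the agent's HTTP API, the second time filtered by an ID prefix (hence the 'aaaaaaaa-...' and 'aabbbbbb-...' JobIDs in the scheduler lines). The "(place 10) ... blocked eval created" and "failed to nack evaluation ...: No cluster leader" messages appear to be shutdown-time noise from the single-node test agent, with the scheduling worker racing the server teardown, rather than failures, as the PASS results show. Against a real agent the same listing is a plain GET; the sketch below assumes the usual local address 127.0.0.1:4646 and the /v1/jobs endpoint, neither of which comes from this log.

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // listJobs fetches a job list endpoint and prints the ID and Status
    // of each entry (field names assumed from the Nomad HTTP API docs).
    func listJobs(url string) {
        resp, err := http.Get(url)
        if err != nil {
            fmt.Println("agent not reachable:", err)
            return
        }
        defer resp.Body.Close()

        var jobs []map[string]interface{}
        if err := json.NewDecoder(resp.Body).Decode(&jobs); err != nil {
            fmt.Println("decode:", err)
            return
        }
        for _, j := range jobs {
            fmt.Println(j["ID"], j["Status"])
        }
    }

    func main() {
        // Full listing, then the same listing narrowed by an ID prefix.
        listJobs("http://127.0.0.1:4646/v1/jobs")
        listJobs("http://127.0.0.1:4646/v1/jobs?prefix=aa")
    }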
=== RUN   TestHTTP_JobsRegister
2016/04/14 12:59:12 [INFO] serf: EventMemberJoin: Node 17075.global 127.0.0.1
2016/04/14 12:59:12 [INFO] nomad: starting 4 scheduling worker(s) for [batch system service _core]
2016/04/14 12:59:12 [INFO] client: using state directory /tmp/NomadClient907918336
2016/04/14 12:59:12 [INFO] client: using alloc directory /tmp/NomadClient819560287
2016/04/14 12:59:12 [INFO] raft: Node at 127.0.0.1:17075 [Leader] entering Leader state
2016/04/14 12:59:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:12 [DEBUG] raft: Node 127.0.0.1:17075 updated peer set (2): [127.0.0.1:17075]
2016/04/14 12:59:12 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:12 [INFO] nomad: adding server Node 17075.global (Addr: 127.0.0.1:17075) (DC: dc1)
2016/04/14 12:59:12 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:12 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:14 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:16 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:16 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:16 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:16 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:16 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:16 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:16 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:16 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:16 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:16 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:16 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:16 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:16 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:16 [INFO] client: setting server address list: []
2016/04/14 12:59:16 [DEBUG] client: node registration complete
2016/04/14 12:59:16 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:16 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:16 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:16 [DEBUG] client: state updated to ready
2016/04/14 12:59:16 [DEBUG] http: Shutting down http server
2016/04/14 12:59:16 [INFO] agent: requesting shutdown
2016/04/14 12:59:16 [INFO] client: shutting down
2016/04/14 12:59:16 [INFO] nomad: shutting down server
2016/04/14 12:59:16 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:16 [DEBUG] worker: dequeued evaluation e3ddc46f-6be6-cb23-7028-6873b3493688
2016/04/14 12:59:16 [ERR] worker: failed to nack evaluation 'e3ddc46f-6be6-cb23-7028-6873b3493688': No cluster leader
2016/04/14 12:59:16 [INFO] consul: shutting down consul service
2016/04/14 12:59:16 [INFO] agent: shutdown complete
--- PASS: TestHTTP_JobsRegister (4.07s)
=== RUN   TestHTTP_JobQuery
2016/04/14 12:59:16 [INFO] serf: EventMemberJoin: Node 17078.global 127.0.0.1
2016/04/14 12:59:16 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:16 [INFO] raft: Node at 127.0.0.1:17078 [Leader] entering Leader state
2016/04/14 12:59:16 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:16 [DEBUG] raft: Node 127.0.0.1:17078 updated peer set (2): [127.0.0.1:17078]
2016/04/14 12:59:16 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:16 [INFO] nomad: adding server Node 17078.global (Addr: 127.0.0.1:17078) (DC: dc1)
2016/04/14 12:59:16 [INFO] client: using state directory /tmp/NomadClient376660713
2016/04/14 12:59:16 [INFO] client: using alloc directory /tmp/NomadClient041209652
2016/04/14 12:59:16 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:16 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:18 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:20 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:20 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:20 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:20 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:20 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:20 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:20 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:20 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:20 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:20 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:20 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:20 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:20 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:20 [INFO] client: setting server address list: []
2016/04/14 12:59:20 [DEBUG] client: node registration complete
2016/04/14 12:59:20 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:20 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:20 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:20 [DEBUG] client: state updated to ready
2016/04/14 12:59:20 [DEBUG] http: Shutting down http server
2016/04/14 12:59:20 [INFO] agent: requesting shutdown
2016/04/14 12:59:20 [INFO] client: shutting down
2016/04/14 12:59:20 [INFO] nomad: shutting down server
2016/04/14 12:59:20 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:20 [DEBUG] worker: dequeued evaluation eb630c64-9802-8907-2679-ec4fe9aec284
2016/04/14 12:59:20 [ERR] worker: failed to nack evaluation 'eb630c64-9802-8907-2679-ec4fe9aec284': No cluster leader
2016/04/14 12:59:20 [INFO] consul: shutting down consul service
2016/04/14 12:59:20 [INFO] agent: shutdown complete
--- PASS: TestHTTP_JobQuery (4.15s)
=== RUN   TestHTTP_JobUpdate
2016/04/14 12:59:20 [INFO] serf: EventMemberJoin: Node 17081.global 127.0.0.1
2016/04/14 12:59:20 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:20 [INFO] client: using state directory /tmp/NomadClient762528902
2016/04/14 12:59:20 [INFO] client: using alloc directory /tmp/NomadClient907726381
2016/04/14 12:59:20 [INFO] raft: Node at 127.0.0.1:17081 [Leader] entering Leader state
2016/04/14 12:59:20 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:20 [DEBUG] raft: Node 127.0.0.1:17081 updated peer set (2): [127.0.0.1:17081]
2016/04/14 12:59:20 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:20 [INFO] nomad: adding server Node 17081.global (Addr: 127.0.0.1:17081) (DC: dc1)
2016/04/14 12:59:20 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:20 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:22 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:24 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:24 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:24 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:24 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:24 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:24 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:24 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:24 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:24 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:24 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:24 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:24 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:24 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:24 [INFO] client: setting server address list: []
2016/04/14 12:59:24 [DEBUG] client: node registration complete
2016/04/14 12:59:24 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:24 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:24 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:24 [DEBUG] client: state updated to ready
2016/04/14 12:59:24 [DEBUG] http: Shutting down http server
2016/04/14 12:59:24 [INFO] agent: requesting shutdown
2016/04/14 12:59:25 [INFO] client: shutting down
2016/04/14 12:59:25 [INFO] nomad: shutting down server
2016/04/14 12:59:25 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:25 [DEBUG] worker: dequeued evaluation 8e6ed3a1-48e8-6c47-1237-d5962c756af4
2016/04/14 12:59:25 [ERR] worker: failed to nack evaluation '8e6ed3a1-48e8-6c47-1237-d5962c756af4': No cluster leader
2016/04/14 12:59:25 [INFO] consul: shutting down consul service
2016/04/14 12:59:25 [INFO] agent: shutdown complete
--- PASS: TestHTTP_JobUpdate (4.06s)
=== RUN   TestHTTP_JobDelete
2016/04/14 12:59:25 [INFO] serf: EventMemberJoin: Node 17084.global 127.0.0.1
2016/04/14 12:59:25 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:25 [INFO] raft: Node at 127.0.0.1:17084 [Leader] entering Leader state
2016/04/14 12:59:25 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:25 [DEBUG] raft: Node 127.0.0.1:17084 updated peer set (2): [127.0.0.1:17084]
2016/04/14 12:59:25 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:25 [INFO] nomad: adding server Node 17084.global (Addr: 127.0.0.1:17084) (DC: dc1)
2016/04/14 12:59:25 [INFO] client: using state directory /tmp/NomadClient021930727
2016/04/14 12:59:25 [INFO] client: using alloc directory /tmp/NomadClient078283034
2016/04/14 12:59:25 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:25 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:27 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:29 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:29 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:29 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:29 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:29 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:29 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:29 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:29 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:29 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:29 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:29 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:29 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:29 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:29 [INFO] client: setting server address list: []
2016/04/14 12:59:29 [DEBUG] client: node registration complete
2016/04/14 12:59:29 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:29 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:29 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:29 [DEBUG] client: state updated to ready
2016/04/14 12:59:29 [DEBUG] http: Shutting down http server
2016/04/14 12:59:29 [INFO] agent: requesting shutdown
2016/04/14 12:59:29 [INFO] client: shutting down
2016/04/14 12:59:29 [INFO] nomad: shutting down server
2016/04/14 12:59:29 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:29 [DEBUG] worker: dequeued evaluation 97391e3d-2863-f278-f6ac-490abc940f3a
2016/04/14 12:59:29 [ERR] worker: failed to nack evaluation '97391e3d-2863-f278-f6ac-490abc940f3a': No cluster leader
2016/04/14 12:59:29 [INFO] consul: shutting down consul service
2016/04/14 12:59:29 [INFO] agent: shutdown complete
--- PASS: TestHTTP_JobDelete (4.06s)
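TestHTTP_JobQuery, TestHTTP_JobUpdate and TestHTTP_JobDelete above drive the per-job endpoint with the three HTTP verbs: GET to read a job, PUT with a JSON job definition to update it, and DELETE to deregister it. The snippet below shows only the body-less GET and DELETE calls against an assumed local agent and an assumed job name ("example"); the update payload format is deliberately left out rather than guessed at.

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        base := "http://127.0.0.1:4646/v1/job/example" // assumed local agent and job name

        // Query: GET /v1/job/<name>
        if resp, err := http.Get(base); err == nil {
            fmt.Println("GET:", resp.Status)
            resp.Body.Close()
        }

        // Deregister: DELETE /v1/job/<name>
        req, err := http.NewRequest(http.MethodDelete, base, nil)
        if err != nil {
            panic(err)
        }
        if resp, err := http.DefaultClient.Do(req); err == nil {
            fmt.Println("DELETE:", resp.Status)
            resp.Body.Close()
        }
    }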
=== RUN   TestHTTP_JobForceEvaluate
2016/04/14 12:59:29 [INFO] serf: EventMemberJoin: Node 17087.global 127.0.0.1
2016/04/14 12:59:29 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:29 [INFO] raft: Node at 127.0.0.1:17087 [Leader] entering Leader state
2016/04/14 12:59:29 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:29 [DEBUG] raft: Node 127.0.0.1:17087 updated peer set (2): [127.0.0.1:17087]
2016/04/14 12:59:29 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:29 [INFO] nomad: adding server Node 17087.global (Addr: 127.0.0.1:17087) (DC: dc1)
2016/04/14 12:59:29 [INFO] client: using state directory /tmp/NomadClient606161756
2016/04/14 12:59:29 [INFO] client: using alloc directory /tmp/NomadClient528134411
2016/04/14 12:59:29 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:29 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:31 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:33 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:33 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:33 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:33 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:33 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:33 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:33 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:33 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:33 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:33 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:33 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:33 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:33 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:33 [INFO] client: setting server address list: []
2016/04/14 12:59:33 [DEBUG] client: node registration complete
2016/04/14 12:59:33 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:33 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:33 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:33 [DEBUG] client: state updated to ready
2016/04/14 12:59:33 [DEBUG] worker: dequeued evaluation d8ef8088-9165-dd64-abc8-701caa9cd0d9
2016/04/14 12:59:33 [DEBUG] sched: <Eval 'd8ef8088-9165-dd64-abc8-701caa9cd0d9' JobID: 'd6d1b9c7-48b0-2e0a-171d-b9adf78dedaf'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:59:33 [DEBUG] worker: created evaluation <Eval '00fae905-fb0d-0117-6628-a7599d1c10ac' JobID: 'd6d1b9c7-48b0-2e0a-171d-b9adf78dedaf'>
2016/04/14 12:59:33 [DEBUG] sched: <Eval 'd8ef8088-9165-dd64-abc8-701caa9cd0d9' JobID: 'd6d1b9c7-48b0-2e0a-171d-b9adf78dedaf'>: failed to place all allocations, blocked eval '00fae905-fb0d-0117-6628-a7599d1c10ac' created
2016/04/14 12:59:33 [DEBUG] http: Shutting down http server
2016/04/14 12:59:33 [INFO] agent: requesting shutdown
2016/04/14 12:59:33 [INFO] client: shutting down
2016/04/14 12:59:33 [INFO] nomad: shutting down server
2016/04/14 12:59:33 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:33 [ERR] nomad: failed to apply plan: leadership lost while committing log
2016/04/14 12:59:33 [ERR] worker: failed to submit plan for evaluation d8ef8088-9165-dd64-abc8-701caa9cd0d9: leadership lost while committing log
2016/04/14 12:59:33 [ERR] worker: failed to nack evaluation 'd8ef8088-9165-dd64-abc8-701caa9cd0d9': No cluster leader
2016/04/14 12:59:33 [INFO] consul: shutting down consul service
2016/04/14 12:59:33 [INFO] agent: shutdown complete
--- PASS: TestHTTP_JobForceEvaluate (4.07s)
=== RUN   TestHTTP_JobEvaluations
2016/04/14 12:59:33 [INFO] serf: EventMemberJoin: Node 17090.global 127.0.0.1
2016/04/14 12:59:33 [INFO] nomad: starting 4 scheduling worker(s) for [batch system service _core]
2016/04/14 12:59:33 [INFO] client: using state directory /tmp/NomadClient730242933
2016/04/14 12:59:33 [INFO] client: using alloc directory /tmp/NomadClient030951504
2016/04/14 12:59:33 [INFO] raft: Node at 127.0.0.1:17090 [Leader] entering Leader state
2016/04/14 12:59:33 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:33 [DEBUG] raft: Node 127.0.0.1:17090 updated peer set (2): [127.0.0.1:17090]
2016/04/14 12:59:33 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:33 [INFO] nomad: adding server Node 17090.global (Addr: 127.0.0.1:17090) (DC: dc1)
2016/04/14 12:59:33 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:33 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:35 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:37 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:37 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:37 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:37 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:37 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:37 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:37 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:37 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:37 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:37 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:37 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:37 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:37 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:37 [INFO] client: setting server address list: []
2016/04/14 12:59:37 [DEBUG] client: node registration complete
2016/04/14 12:59:37 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:37 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:37 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:37 [DEBUG] client: state updated to ready
2016/04/14 12:59:37 [DEBUG] http: Shutting down http server
2016/04/14 12:59:37 [INFO] agent: requesting shutdown
2016/04/14 12:59:37 [INFO] client: shutting down
2016/04/14 12:59:37 [INFO] nomad: shutting down server
2016/04/14 12:59:37 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:37 [DEBUG] worker: dequeued evaluation 25f9d90a-2678-9713-9a2c-718a2ec6dd7e
2016/04/14 12:59:37 [ERR] worker: failed to nack evaluation '25f9d90a-2678-9713-9a2c-718a2ec6dd7e': No cluster leader
2016/04/14 12:59:37 [INFO] consul: shutting down consul service
2016/04/14 12:59:37 [INFO] agent: shutdown complete
--- PASS: TestHTTP_JobEvaluations (4.07s)
=== RUN   TestHTTP_JobAllocations
2016/04/14 12:59:37 [INFO] serf: EventMemberJoin: Node 17093.global 127.0.0.1
2016/04/14 12:59:37 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:37 [INFO] raft: Node at 127.0.0.1:17093 [Leader] entering Leader state
2016/04/14 12:59:37 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:37 [DEBUG] raft: Node 127.0.0.1:17093 updated peer set (2): [127.0.0.1:17093]
2016/04/14 12:59:37 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:37 [INFO] nomad: adding server Node 17093.global (Addr: 127.0.0.1:17093) (DC: dc1)
2016/04/14 12:59:37 [INFO] client: using state directory /tmp/NomadClient786542594
2016/04/14 12:59:37 [INFO] client: using alloc directory /tmp/NomadClient037778809
2016/04/14 12:59:37 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:37 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:39 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:41 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:41 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:41 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:41 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:41 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:41 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:41 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:41 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:41 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:41 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:41 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:41 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:41 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:41 [INFO] client: setting server address list: []
2016/04/14 12:59:41 [DEBUG] client: node registration complete
2016/04/14 12:59:41 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:41 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:41 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:41 [DEBUG] client: state updated to ready
2016/04/14 12:59:41 [DEBUG] http: Shutting down http server
2016/04/14 12:59:41 [INFO] agent: requesting shutdown
2016/04/14 12:59:41 [INFO] client: shutting down
2016/04/14 12:59:41 [INFO] nomad: shutting down server
2016/04/14 12:59:41 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:41 [DEBUG] worker: dequeued evaluation 8bb10d45-ea81-dcde-a3ee-905c1215aaa5
2016/04/14 12:59:41 [ERR] worker: failed to nack evaluation '8bb10d45-ea81-dcde-a3ee-905c1215aaa5': No cluster leader
2016/04/14 12:59:41 [INFO] consul: shutting down consul service
2016/04/14 12:59:41 [INFO] agent: shutdown complete
--- PASS: TestHTTP_JobAllocations (4.06s)
=== RUN   TestHTTP_PeriodicForce
2016/04/14 12:59:41 [INFO] serf: EventMemberJoin: Node 17096.global 127.0.0.1
2016/04/14 12:59:41 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:41 [INFO] client: using state directory /tmp/NomadClient139959827
2016/04/14 12:59:41 [INFO] client: using alloc directory /tmp/NomadClient697872982
2016/04/14 12:59:41 [INFO] raft: Node at 127.0.0.1:17096 [Leader] entering Leader state
2016/04/14 12:59:41 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:41 [DEBUG] raft: Node 127.0.0.1:17096 updated peer set (2): [127.0.0.1:17096]
2016/04/14 12:59:41 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:41 [INFO] nomad: adding server Node 17096.global (Addr: 127.0.0.1:17096) (DC: dc1)
2016/04/14 12:59:41 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:41 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:43 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:45 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:45 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:45 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:45 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:45 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:45 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:45 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:45 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:45 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:45 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:45 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:45 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:45 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:45 [INFO] client: setting server address list: []
2016/04/14 12:59:45 [DEBUG] client: node registration complete
2016/04/14 12:59:45 [DEBUG] client: state updated to ready
2016/04/14 12:59:45 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:45 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:45 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:45 [DEBUG] nomad.periodic: registered periodic job "039a6173-06ef-9d57-1ddc-5eecffaa1ffb"
2016/04/14 12:59:45 [DEBUG] http: Shutting down http server
2016/04/14 12:59:45 [INFO] agent: requesting shutdown
2016/04/14 12:59:45 [INFO] client: shutting down
2016/04/14 12:59:45 [INFO] nomad: shutting down server
2016/04/14 12:59:45 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:45 [DEBUG] nomad.periodic: launching job "039a6173-06ef-9d57-1ddc-5eecffaa1ffb" in 14.543284418s
2016/04/14 12:59:45 [DEBUG] worker: dequeued evaluation f33760a8-acc9-38ac-438d-aaf21b3f6ae0
2016/04/14 12:59:45 [ERR] worker: failed to nack evaluation 'f33760a8-acc9-38ac-438d-aaf21b3f6ae0': No cluster leader
2016/04/14 12:59:45 [INFO] consul: shutting down consul service
2016/04/14 12:59:45 [INFO] agent: shutdown complete
--- PASS: TestHTTP_PeriodicForce (4.18s)
=== RUN   TestLogWriter
--- PASS: TestLogWriter (0.00s)
=== RUN   TestHTTP_NodesList
2016/04/14 12:59:45 [INFO] serf: EventMemberJoin: Node 17099.global 127.0.0.1
2016/04/14 12:59:45 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:45 [INFO] raft: Node at 127.0.0.1:17099 [Leader] entering Leader state
2016/04/14 12:59:45 [INFO] client: using state directory /tmp/NomadClient613055480
2016/04/14 12:59:45 [INFO] client: using alloc directory /tmp/NomadClient820461815
2016/04/14 12:59:45 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:45 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:45 [INFO] nomad: adding server Node 17099.global (Addr: 127.0.0.1:17099) (DC: dc1)
2016/04/14 12:59:45 [DEBUG] raft: Node 127.0.0.1:17099 updated peer set (2): [127.0.0.1:17099]
2016/04/14 12:59:45 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:45 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:47 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:49 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:49 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:49 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:49 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:49 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:49 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:49 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:49 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:49 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:49 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:49 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:49 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:49 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:49 [INFO] client: setting server address list: []
2016/04/14 12:59:49 [DEBUG] client: node registration complete
2016/04/14 12:59:49 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:49 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:49 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:49 [DEBUG] client: state updated to ready
2016/04/14 12:59:49 [DEBUG] http: Shutting down http server
2016/04/14 12:59:49 [INFO] agent: requesting shutdown
2016/04/14 12:59:49 [INFO] client: shutting down
2016/04/14 12:59:49 [INFO] nomad: shutting down server
2016/04/14 12:59:49 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:49 [INFO] consul: shutting down consul service
2016/04/14 12:59:49 [INFO] agent: shutdown complete
--- PASS: TestHTTP_NodesList (4.07s)
=== RUN   TestHTTP_NodesPrefixList
2016/04/14 12:59:49 [INFO] serf: EventMemberJoin: Node 17102.global 127.0.0.1
2016/04/14 12:59:49 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:49 [INFO] raft: Node at 127.0.0.1:17102 [Leader] entering Leader state
2016/04/14 12:59:49 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:49 [DEBUG] raft: Node 127.0.0.1:17102 updated peer set (2): [127.0.0.1:17102]
2016/04/14 12:59:49 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:49 [INFO] nomad: adding server Node 17102.global (Addr: 127.0.0.1:17102) (DC: dc1)
2016/04/14 12:59:49 [INFO] client: using state directory /tmp/NomadClient709802561
2016/04/14 12:59:49 [INFO] client: using alloc directory /tmp/NomadClient582665900
2016/04/14 12:59:49 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:49 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:51 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:53 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:53 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:53 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:53 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:53 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:53 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:53 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:53 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:53 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:53 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:53 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:53 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:53 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:53 [INFO] client: setting server address list: []
2016/04/14 12:59:53 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:53 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:53 [DEBUG] client: node registration complete
2016/04/14 12:59:53 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:53 [DEBUG] client: state updated to ready
2016/04/14 12:59:53 [DEBUG] http: Shutting down http server
2016/04/14 12:59:53 [INFO] agent: requesting shutdown
2016/04/14 12:59:53 [INFO] client: shutting down
2016/04/14 12:59:53 [INFO] nomad: shutting down server
2016/04/14 12:59:53 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:53 [INFO] consul: shutting down consul service
2016/04/14 12:59:53 [INFO] agent: shutdown complete
--- PASS: TestHTTP_NodesPrefixList (4.08s)
=== RUN   TestHTTP_NodeForceEval
2016/04/14 12:59:53 [INFO] serf: EventMemberJoin: Node 17105.global 127.0.0.1
2016/04/14 12:59:53 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:53 [INFO] client: using state directory /tmp/NomadClient735031486
2016/04/14 12:59:53 [INFO] client: using alloc directory /tmp/NomadClient863798533
2016/04/14 12:59:53 [INFO] raft: Node at 127.0.0.1:17105 [Leader] entering Leader state
2016/04/14 12:59:53 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:53 [DEBUG] raft: Node 127.0.0.1:17105 updated peer set (2): [127.0.0.1:17105]
2016/04/14 12:59:53 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:53 [INFO] nomad: adding server Node 17105.global (Addr: 127.0.0.1:17105) (DC: dc1)
2016/04/14 12:59:53 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:53 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:55 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 12:59:57 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 12:59:57 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 12:59:57 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 12:59:57 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 12:59:57 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 12:59:57 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 12:59:57 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 12:59:57 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 12:59:57 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 12:59:57 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 12:59:57 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 12:59:57 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 12:59:57 [DEBUG] client: available drivers [raw_exec]
2016/04/14 12:59:57 [INFO] client: setting server address list: []
2016/04/14 12:59:57 [DEBUG] client: node registration complete
2016/04/14 12:59:57 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 12:59:57 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 12:59:57 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 12:59:57 [DEBUG] client: state updated to ready
2016/04/14 12:59:57 [DEBUG] http: Shutting down http server
2016/04/14 12:59:57 [INFO] agent: requesting shutdown
2016/04/14 12:59:57 [INFO] client: shutting down
2016/04/14 12:59:57 [INFO] nomad: shutting down server
2016/04/14 12:59:57 [WARN] serf: Shutdown without a Leave
2016/04/14 12:59:57 [DEBUG] worker: dequeued evaluation 6c89135e-7eca-b77a-c3a6-4a070d1fc71a
2016/04/14 12:59:57 [ERR] worker: failed to nack evaluation '6c89135e-7eca-b77a-c3a6-4a070d1fc71a': No cluster leader
2016/04/14 12:59:57 [INFO] consul: shutting down consul service
2016/04/14 12:59:57 [INFO] agent: shutdown complete
--- PASS: TestHTTP_NodeForceEval (4.06s)
=== RUN   TestHTTP_NodeAllocations
2016/04/14 12:59:57 [INFO] serf: EventMemberJoin: Node 17108.global 127.0.0.1
2016/04/14 12:59:57 [INFO] raft: Node at 127.0.0.1:17108 [Leader] entering Leader state
2016/04/14 12:59:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:59:57 [DEBUG] raft: Node 127.0.0.1:17108 updated peer set (2): [127.0.0.1:17108]
2016/04/14 12:59:57 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 12:59:57 [INFO] nomad: cluster leadership acquired
2016/04/14 12:59:57 [INFO] nomad: adding server Node 17108.global (Addr: 127.0.0.1:17108) (DC: dc1)
2016/04/14 12:59:57 [INFO] client: using state directory /tmp/NomadClient230675327
2016/04/14 12:59:57 [INFO] client: using alloc directory /tmp/NomadClient081283794
2016/04/14 12:59:57 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 12:59:57 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 12:59:59 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 13:00:01 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 13:00:01 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 13:00:01 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 13:00:01 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 13:00:01 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 13:00:01 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 13:00:01 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 13:00:01 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 13:00:01 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 13:00:01 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 13:00:01 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 13:00:01 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 13:00:01 [DEBUG] client: available drivers [raw_exec]
2016/04/14 13:00:01 [INFO] client: setting server address list: []
2016/04/14 13:00:01 [DEBUG] client: node registration complete
2016/04/14 13:00:01 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 13:00:01 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 13:00:01 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 13:00:01 [DEBUG] client: state updated to ready
2016/04/14 13:00:01 [DEBUG] http: Shutting down http server
2016/04/14 13:00:01 [INFO] agent: requesting shutdown
2016/04/14 13:00:01 [INFO] client: shutting down
2016/04/14 13:00:01 [INFO] nomad: shutting down server
2016/04/14 13:00:01 [WARN] serf: Shutdown without a Leave
2016/04/14 13:00:01 [INFO] consul: shutting down consul service
2016/04/14 13:00:01 [INFO] agent: shutdown complete
--- PASS: TestHTTP_NodeAllocations (4.06s)
=== RUN   TestHTTP_NodeDrain
2016/04/14 13:00:01 [INFO] serf: EventMemberJoin: Node 17111.global 127.0.0.1
2016/04/14 13:00:01 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 13:00:01 [INFO] client: using state directory /tmp/NomadClient894581716
2016/04/14 13:00:01 [INFO] raft: Node at 127.0.0.1:17111 [Leader] entering Leader state
2016/04/14 13:00:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 13:00:01 [DEBUG] raft: Node 127.0.0.1:17111 updated peer set (2): [127.0.0.1:17111]
2016/04/14 13:00:01 [INFO] client: using alloc directory /tmp/NomadClient202608419
2016/04/14 13:00:01 [INFO] nomad: cluster leadership acquired
2016/04/14 13:00:01 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 13:00:01 [INFO] nomad: adding server Node 17111.global (Addr: 127.0.0.1:17111) (DC: dc1)
2016/04/14 13:00:01 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 13:00:03 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 13:00:05 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 13:00:05 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 13:00:05 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 13:00:05 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 13:00:05 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 13:00:05 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 13:00:05 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 13:00:05 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 13:00:05 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 13:00:05 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 13:00:05 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 13:00:05 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 13:00:05 [DEBUG] client: available drivers [raw_exec]
2016/04/14 13:00:05 [INFO] client: setting server address list: []
2016/04/14 13:00:05 [DEBUG] client: node registration complete
2016/04/14 13:00:05 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 13:00:05 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 13:00:05 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 13:00:05 [DEBUG] client: state updated to ready
2016/04/14 13:00:05 [DEBUG] http: Shutting down http server
2016/04/14 13:00:05 [INFO] agent: requesting shutdown
2016/04/14 13:00:05 [INFO] client: shutting down
2016/04/14 13:00:05 [INFO] nomad: shutting down server
2016/04/14 13:00:05 [WARN] serf: Shutdown without a Leave
2016/04/14 13:00:05 [DEBUG] worker: dequeued evaluation b6722d82-b5b0-492a-6105-d3d781cd507d
2016/04/14 13:00:05 [ERR] worker: failed to nack evaluation 'b6722d82-b5b0-492a-6105-d3d781cd507d': No cluster leader
2016/04/14 13:00:05 [INFO] consul: shutting down consul service
2016/04/14 13:00:05 [INFO] agent: shutdown complete
--- PASS: TestHTTP_NodeDrain (4.06s)
=== RUN   TestHTTP_NodeQuery
2016/04/14 13:00:05 [INFO] serf: EventMemberJoin: Node 17114.global 127.0.0.1
2016/04/14 13:00:05 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 13:00:05 [INFO] raft: Node at 127.0.0.1:17114 [Leader] entering Leader state
2016/04/14 13:00:05 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 13:00:05 [DEBUG] raft: Node 127.0.0.1:17114 updated peer set (2): [127.0.0.1:17114]
2016/04/14 13:00:05 [INFO] nomad: cluster leadership acquired
2016/04/14 13:00:05 [INFO] nomad: adding server Node 17114.global (Addr: 127.0.0.1:17114) (DC: dc1)
2016/04/14 13:00:05 [INFO] client: using state directory /tmp/NomadClient668260685
2016/04/14 13:00:05 [INFO] client: using alloc directory /tmp/NomadClient064982600
2016/04/14 13:00:05 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 13:00:05 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 13:00:07 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 13:00:09 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 13:00:09 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 13:00:09 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 13:00:09 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 13:00:09 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 13:00:09 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 13:00:09 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 13:00:09 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 13:00:09 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 13:00:09 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 13:00:09 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 13:00:09 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 13:00:09 [DEBUG] client: available drivers [raw_exec]
2016/04/14 13:00:09 [INFO] client: setting server address list: []
2016/04/14 13:00:09 [DEBUG] client: node registration complete
2016/04/14 13:00:09 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 13:00:09 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 13:00:09 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 13:00:09 [DEBUG] client: state updated to ready
2016/04/14 13:00:09 [DEBUG] http: Shutting down http server
2016/04/14 13:00:09 [INFO] agent: requesting shutdown
2016/04/14 13:00:09 [INFO] client: shutting down
2016/04/14 13:00:09 [INFO] nomad: shutting down server
2016/04/14 13:00:09 [WARN] serf: Shutdown without a Leave
2016/04/14 13:00:09 [INFO] consul: shutting down consul service
2016/04/14 13:00:09 [INFO] agent: shutdown complete
--- PASS: TestHTTP_NodeQuery (4.06s)
=== RUN   TestHTTP_RegionList
2016/04/14 13:00:09 [INFO] serf: EventMemberJoin: Node 17117.global 127.0.0.1
2016/04/14 13:00:09 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 13:00:09 [INFO] client: using state directory /tmp/NomadClient109198010
2016/04/14 13:00:09 [INFO] client: using alloc directory /tmp/NomadClient650033361
2016/04/14 13:00:09 [INFO] raft: Node at 127.0.0.1:17117 [Leader] entering Leader state
2016/04/14 13:00:09 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 13:00:09 [DEBUG] raft: Node 127.0.0.1:17117 updated peer set (2): [127.0.0.1:17117]
2016/04/14 13:00:10 [INFO] nomad: adding server Node 17117.global (Addr: 127.0.0.1:17117) (DC: dc1)
2016/04/14 13:00:10 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 13:00:10 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 13:00:10 [INFO] nomad: cluster leadership acquired
2016/04/14 13:00:12 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 13:00:14 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 13:00:14 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 13:00:14 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 13:00:14 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 13:00:14 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 13:00:14 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 13:00:14 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 13:00:14 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 13:00:14 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 13:00:14 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 13:00:14 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 13:00:14 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 13:00:14 [DEBUG] client: available drivers [raw_exec]
2016/04/14 13:00:14 [INFO] client: setting server address list: []
2016/04/14 13:00:14 [DEBUG] client: node registration complete
2016/04/14 13:00:14 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 13:00:14 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 13:00:14 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 13:00:14 [DEBUG] client: state updated to ready
2016/04/14 13:00:14 [DEBUG] http: Shutting down http server
2016/04/14 13:00:14 [INFO] agent: requesting shutdown
2016/04/14 13:00:14 [INFO] client: shutting down
2016/04/14 13:00:14 [INFO] nomad: shutting down server
2016/04/14 13:00:14 [WARN] serf: Shutdown without a Leave
2016/04/14 13:00:14 [INFO] consul: shutting down consul service
2016/04/14 13:00:14 [INFO] agent: shutdown complete
--- PASS: TestHTTP_RegionList (4.19s)
=== RUN   TestProviderService
--- PASS: TestProviderService (0.00s)
=== RUN   TestProviderConfig
--- PASS: TestProviderConfig (0.00s)
=== RUN   TestSCADAListener
--- PASS: TestSCADAListener (0.00s)
=== RUN   TestSCADAAddr
--- PASS: TestSCADAAddr (0.00s)
=== RUN   TestHTTP_StatusLeader
2016/04/14 13:00:14 [INFO] serf: EventMemberJoin: Node 17120.global 127.0.0.1
2016/04/14 13:00:14 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 13:00:14 [INFO] raft: Node at 127.0.0.1:17120 [Leader] entering Leader state
2016/04/14 13:00:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 13:00:14 [DEBUG] raft: Node 127.0.0.1:17120 updated peer set (2): [127.0.0.1:17120]
2016/04/14 13:00:14 [INFO] nomad: cluster leadership acquired
2016/04/14 13:00:14 [INFO] nomad: adding server Node 17120.global (Addr: 127.0.0.1:17120) (DC: dc1)
2016/04/14 13:00:14 [INFO] client: using state directory /tmp/NomadClient991534379
2016/04/14 13:00:14 [INFO] client: using alloc directory /tmp/NomadClient311875214
2016/04/14 13:00:14 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 13:00:14 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 13:00:16 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 13:00:18 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 13:00:18 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 13:00:18 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 13:00:18 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 13:00:18 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 13:00:18 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 13:00:18 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 13:00:18 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 13:00:18 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 13:00:18 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 13:00:18 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 13:00:18 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 13:00:18 [DEBUG] client: available drivers [raw_exec]
2016/04/14 13:00:18 [INFO] client: setting server address list: []
2016/04/14 13:00:18 [DEBUG] client: node registration complete
2016/04/14 13:00:18 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 13:00:18 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 13:00:18 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 13:00:18 [DEBUG] client: state updated to ready
2016/04/14 13:00:18 [DEBUG] http: Shutting down http server
2016/04/14 13:00:18 [INFO] agent: requesting shutdown
2016/04/14 13:00:18 [INFO] client: shutting down
2016/04/14 13:00:18 [INFO] nomad: shutting down server
2016/04/14 13:00:18 [WARN] serf: Shutdown without a Leave
2016/04/14 13:00:18 [INFO] consul: shutting down consul service
2016/04/14 13:00:18 [INFO] agent: shutdown complete
--- PASS: TestHTTP_StatusLeader (4.06s)
=== RUN   TestHTTP_StatusPeers
2016/04/14 13:00:18 [INFO] serf: EventMemberJoin: Node 17123.global 127.0.0.1
2016/04/14 13:00:18 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 13:00:18 [INFO] raft: Node at 127.0.0.1:17123 [Leader] entering Leader state
2016/04/14 13:00:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 13:00:18 [DEBUG] raft: Node 127.0.0.1:17123 updated peer set (2): [127.0.0.1:17123]
2016/04/14 13:00:18 [INFO] nomad: cluster leadership acquired
2016/04/14 13:00:18 [INFO] nomad: adding server Node 17123.global (Addr: 127.0.0.1:17123) (DC: dc1)
2016/04/14 13:00:18 [INFO] client: using state directory /tmp/NomadClient930351856
2016/04/14 13:00:18 [INFO] client: using alloc directory /tmp/NomadClient363267471
2016/04/14 13:00:18 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 13:00:18 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 13:00:20 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 13:00:22 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 13:00:22 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 13:00:22 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 13:00:22 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 13:00:22 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 13:00:22 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 13:00:22 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 13:00:22 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 13:00:22 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 13:00:22 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 13:00:22 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 13:00:22 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 13:00:22 [DEBUG] client: available drivers [raw_exec]
2016/04/14 13:00:22 [INFO] client: setting server address list: []
2016/04/14 13:00:22 [DEBUG] client: node registration complete
2016/04/14 13:00:22 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 13:00:22 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 13:00:22 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 13:00:22 [DEBUG] client: state updated to ready
2016/04/14 13:00:22 [DEBUG] http: Shutting down http server
2016/04/14 13:00:22 [INFO] agent: requesting shutdown
2016/04/14 13:00:22 [INFO] client: shutting down
2016/04/14 13:00:22 [INFO] nomad: shutting down server
2016/04/14 13:00:22 [WARN] serf: Shutdown without a Leave
2016/04/14 13:00:22 [INFO] consul: shutting down consul service
2016/04/14 13:00:22 [INFO] nomad: cluster leadership lost
2016/04/14 13:00:22 [INFO] agent: shutdown complete
--- PASS: TestHTTP_StatusPeers (4.06s)
=== RUN   TestSyslogFilter
--- FAIL: TestSyslogFilter (0.00s)
	syslog_test.go:22: err: Unix syslog delivery error
=== RUN   TestHTTP_SystemGarbageCollect
2016/04/14 13:00:22 [INFO] serf: EventMemberJoin: Node 17126.global 127.0.0.1
2016/04/14 13:00:22 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system _core]
2016/04/14 13:00:22 [INFO] raft: Node at 127.0.0.1:17126 [Leader] entering Leader state
2016/04/14 13:00:22 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 13:00:22 [DEBUG] raft: Node 127.0.0.1:17126 updated peer set (2): [127.0.0.1:17126]
2016/04/14 13:00:22 [INFO] nomad: cluster leadership acquired
2016/04/14 13:00:22 [INFO] nomad: adding server Node 17126.global (Addr: 127.0.0.1:17126) (DC: dc1)
2016/04/14 13:00:22 [INFO] client: using state directory /tmp/NomadClient640698009
2016/04/14 13:00:22 [INFO] client: using alloc directory /tmp/NomadClient307868964
2016/04/14 13:00:22 [DEBUG] client: periodically fingerprinting cgroup at duration 15s
2016/04/14 13:00:22 [DEBUG] client: periodically fingerprinting consul at duration 15s
2016/04/14 13:00:24 [DEBUG] fingerprint.env_aws: Error querying AWS Metadata URL, skipping
2016/04/14 13:00:26 [WARN]: fingerprint.env_gce: Could not read value for attribute "machine-type"
2016/04/14 13:00:26 [DEBUG] fingerprint.env_gce: Error querying GCE Metadata URL, skipping
2016/04/14 13:00:26 [DEBUG] fingerprint.network: Detected interface lo  with IP 127.0.0.1 during fingerprinting
2016/04/14 13:00:26 [WARN] fingerprint.network: Unable to read link speed from /sys/class/net/lo/speed
2016/04/14 13:00:26 [DEBUG] fingerprint.network: Unable to read link speed; setting to default 100
2016/04/14 13:00:26 [DEBUG] client: applied fingerprints [arch cgroup cpu host memory network storage]
2016/04/14 13:00:26 [WARN] driver.raw_exec: raw exec is enabled. Only enable if needed
2016/04/14 13:00:26 [DEBUG] driver.java: must run as root user on linux, disabling
2016/04/14 13:00:26 [DEBUG] driver.rkt: must run as root user, disabling
2016/04/14 13:00:26 [DEBUG] driver.docker: privileged containers are disabled
2016/04/14 13:00:26 [DEBUG] driver.docker: could not connect to docker daemon at unix:///var/run/docker.sock: Get http://unix.sock/version: dial unix /var/run/docker.sock: connect: no such file or directory
2016/04/14 13:00:26 [DEBUG] driver.exec: cgroups unavailable, disabling
2016/04/14 13:00:26 [DEBUG] client: available drivers [raw_exec]
2016/04/14 13:00:26 [INFO] client: setting server address list: []
2016/04/14 13:00:26 [DEBUG] client: node registration complete
2016/04/14 13:00:26 [DEBUG] client: updated allocations at index 1 (pulled 0) (filtered 0)
2016/04/14 13:00:26 [DEBUG] client: allocs: (added 0) (removed 0) (updated 0) (ignore 0)
2016/04/14 13:00:26 [DEBUG] client: periodically checking for node changes at duration 5s
2016/04/14 13:00:26 [DEBUG] client: state updated to ready
2016/04/14 13:00:26 [DEBUG] http: Shutting down http server
2016/04/14 13:00:26 [INFO] agent: requesting shutdown
2016/04/14 13:00:26 [INFO] client: shutting down
2016/04/14 13:00:26 [INFO] nomad: shutting down server
2016/04/14 13:00:26 [WARN] serf: Shutdown without a Leave
2016/04/14 13:00:26 [DEBUG] worker: dequeued evaluation 6f24e7cf-ac91-53da-d2c9-60ba037e34b4
2016/04/14 13:00:26 [ERR] worker: failed to nack evaluation '6f24e7cf-ac91-53da-d2c9-60ba037e34b4': No cluster leader
2016/04/14 13:00:26 [INFO] consul: shutting down consul service
2016/04/14 13:00:26 [INFO] agent: shutdown complete
--- PASS: TestHTTP_SystemGarbageCollect (4.06s)
=== RUN   TestRandomStagger
--- PASS: TestRandomStagger (0.00s)
FAIL
FAIL	github.com/hashicorp/nomad/command/agent	173.415s
=== RUN   TestArgs_ReplaceEnv_Invalid
--- PASS: TestArgs_ReplaceEnv_Invalid (0.00s)
=== RUN   TestArgs_ReplaceEnv_Valid
--- PASS: TestArgs_ReplaceEnv_Valid (0.00s)
=== RUN   TestArgs_ReplaceEnv_Period
--- PASS: TestArgs_ReplaceEnv_Period (0.00s)
=== RUN   TestArgs_ReplaceEnv_Dash
--- PASS: TestArgs_ReplaceEnv_Dash (0.00s)
=== RUN   TestArgs_ReplaceEnv_Chained
--- PASS: TestArgs_ReplaceEnv_Chained (0.00s)
PASS
ok  	github.com/hashicorp/nomad/helper/args	0.025s
?   	github.com/hashicorp/nomad/helper/discover	[no test files]
=== RUN   TestStringFlag_implements
--- PASS: TestStringFlag_implements (0.00s)
=== RUN   TestStringFlagSet
--- PASS: TestStringFlagSet (0.00s)
PASS
ok  	github.com/hashicorp/nomad/helper/flag-slice	0.022s
=== RUN   TestWriter_impl
--- PASS: TestWriter_impl (0.00s)
=== RUN   TestWriter
--- PASS: TestWriter (0.00s)
PASS
ok  	github.com/hashicorp/nomad/helper/gated-writer	0.019s
?   	github.com/hashicorp/nomad/helper/testtask	[no test files]
=== RUN   TestParse
--- FAIL: TestParse (0.00s)
	parse_test.go:326: Testing parse: basic.hcl
	parse_test.go:336: file: basic.hcl
		
		open /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/src/github.com/hashicorp/nomad/jobspec/test-fixtures/basic.hcl: no such file or directory
=== RUN   TestBadPorts
--- FAIL: TestBadPorts (0.00s)
	parse_test.go:355: 
		Expected error
		  Port label does not conform to naming requirements ^[a-zA-Z0-9_]+$
		got
		  open /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/src/github.com/hashicorp/nomad/jobspec/test-fixtures/bad-ports.hcl: no such file or directory
=== RUN   TestOverlappingPorts
--- FAIL: TestOverlappingPorts (0.00s)
	parse_test.go:372: Expected collision error; got open /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/src/github.com/hashicorp/nomad/jobspec/test-fixtures/overlapping-ports.hcl: no such file or directory
=== RUN   TestIncompleteServiceDefn
--- FAIL: TestIncompleteServiceDefn (0.00s)
	parse_test.go:389: Expected collision error; got open /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/src/github.com/hashicorp/nomad/jobspec/test-fixtures/incorrect-service-def.hcl: no such file or directory
=== RUN   TestIncorrectKey
--- FAIL: TestIncorrectKey (0.00s)
	parse_test.go:406: Expected collision error; got open /<<BUILDDIR>>/nomad-0.3.1+dfsg/obj-arm-linux-gnueabihf/src/github.com/hashicorp/nomad/jobspec/test-fixtures/basic_wrong_key.hcl: no such file or directory
FAIL
FAIL	github.com/hashicorp/nomad/jobspec	0.065s
=== RUN   TestAllocEndpoint_List
2016/04/14 12:57:57 [INFO] serf: EventMemberJoin: Node 15001.global 127.0.0.1
2016/04/14 12:57:57 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:57 [INFO] raft: Node at 127.0.0.1:15001 [Leader] entering Leader state
2016/04/14 12:57:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:57 [DEBUG] raft: Node 127.0.0.1:15001 updated peer set (2): [127.0.0.1:15001]
2016/04/14 12:57:57 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:57 [INFO] nomad: adding server Node 15001.global (Addr: 127.0.0.1:15001) (DC: dc1)
2016/04/14 12:57:57 [INFO] nomad: shutting down server
2016/04/14 12:57:57 [WARN] serf: Shutdown without a Leave
--- PASS: TestAllocEndpoint_List (0.03s)
=== RUN   TestAllocEndpoint_List_Blocking
2016/04/14 12:57:57 [INFO] serf: EventMemberJoin: Node 15003.global 127.0.0.1
2016/04/14 12:57:57 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:57:57 [INFO] raft: Node at 127.0.0.1:15003 [Leader] entering Leader state
2016/04/14 12:57:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:57 [DEBUG] raft: Node 127.0.0.1:15003 updated peer set (2): [127.0.0.1:15003]
2016/04/14 12:57:57 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:57 [INFO] nomad: adding server Node 15003.global (Addr: 127.0.0.1:15003) (DC: dc1)
2016/04/14 12:57:57 [INFO] nomad: shutting down server
2016/04/14 12:57:57 [WARN] serf: Shutdown without a Leave
--- PASS: TestAllocEndpoint_List_Blocking (0.23s)
=== RUN   TestAllocEndpoint_GetAlloc
2016/04/14 12:57:57 [INFO] serf: EventMemberJoin: Node 15005.global 127.0.0.1
2016/04/14 12:57:57 [INFO] nomad: starting 4 scheduling worker(s) for [system noop service batch _core]
2016/04/14 12:57:57 [INFO] raft: Node at 127.0.0.1:15005 [Leader] entering Leader state
2016/04/14 12:57:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:57 [DEBUG] raft: Node 127.0.0.1:15005 updated peer set (2): [127.0.0.1:15005]
2016/04/14 12:57:57 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:57 [INFO] nomad: adding server Node 15005.global (Addr: 127.0.0.1:15005) (DC: dc1)
2016/04/14 12:57:57 [INFO] nomad: shutting down server
2016/04/14 12:57:57 [WARN] serf: Shutdown without a Leave
--- PASS: TestAllocEndpoint_GetAlloc (0.03s)
=== RUN   TestAllocEndpoint_GetAlloc_Blocking
2016/04/14 12:57:57 [INFO] serf: EventMemberJoin: Node 15007.global 127.0.0.1
2016/04/14 12:57:57 [INFO] nomad: starting 4 scheduling worker(s) for [batch system noop service _core]
2016/04/14 12:57:57 [INFO] raft: Node at 127.0.0.1:15007 [Leader] entering Leader state
2016/04/14 12:57:57 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:57 [DEBUG] raft: Node 127.0.0.1:15007 updated peer set (2): [127.0.0.1:15007]
2016/04/14 12:57:57 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:57 [INFO] nomad: adding server Node 15007.global (Addr: 127.0.0.1:15007) (DC: dc1)
2016/04/14 12:57:58 [INFO] nomad: shutting down server
2016/04/14 12:57:58 [WARN] serf: Shutdown without a Leave
--- PASS: TestAllocEndpoint_GetAlloc_Blocking (0.23s)
=== RUN   TestAllocEndpoint_GetAllocs
2016/04/14 12:57:58 [INFO] serf: EventMemberJoin: Node 15009.global 127.0.0.1
2016/04/14 12:57:58 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:58 [INFO] raft: Node at 127.0.0.1:15009 [Leader] entering Leader state
2016/04/14 12:57:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:58 [DEBUG] raft: Node 127.0.0.1:15009 updated peer set (2): [127.0.0.1:15009]
2016/04/14 12:57:58 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:58 [INFO] nomad: adding server Node 15009.global (Addr: 127.0.0.1:15009) (DC: dc1)
2016/04/14 12:57:58 [INFO] nomad: shutting down server
2016/04/14 12:57:58 [WARN] serf: Shutdown without a Leave
--- PASS: TestAllocEndpoint_GetAllocs (0.03s)
=== RUN   TestBlockedEvals_Block_Disabled
--- PASS: TestBlockedEvals_Block_Disabled (0.00s)
=== RUN   TestBlockedEvals_Block_SameJob
--- PASS: TestBlockedEvals_Block_SameJob (0.00s)
=== RUN   TestBlockedEvals_GetDuplicates
--- PASS: TestBlockedEvals_GetDuplicates (0.50s)
=== RUN   TestBlockedEvals_UnblockEscaped
--- PASS: TestBlockedEvals_UnblockEscaped (0.01s)
=== RUN   TestBlockedEvals_UnblockEligible
--- PASS: TestBlockedEvals_UnblockEligible (0.01s)
=== RUN   TestBlockedEvals_UnblockIneligible
--- PASS: TestBlockedEvals_UnblockIneligible (0.01s)
=== RUN   TestBlockedEvals_UnblockUnknown
--- PASS: TestBlockedEvals_UnblockUnknown (0.01s)
=== RUN   TestCoreScheduler_EvalGC
2016/04/14 12:57:58 [INFO] serf: EventMemberJoin: Node 15011.global 127.0.0.1
2016/04/14 12:57:58 [INFO] nomad: starting 4 scheduling worker(s) for [batch system noop service _core]
2016/04/14 12:57:58 [INFO] raft: Node at 127.0.0.1:15011 [Leader] entering Leader state
2016/04/14 12:57:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:58 [DEBUG] raft: Node 127.0.0.1:15011 updated peer set (2): [127.0.0.1:15011]
2016/04/14 12:57:58 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:58 [INFO] nomad: adding server Node 15011.global (Addr: 127.0.0.1:15011) (DC: dc1)
2016/04/14 12:57:58 [DEBUG] sched.core: eval GC: scanning before index 2000 (1h0m0s)
2016/04/14 12:57:58 [DEBUG] sched.core: eval GC: 1 evaluations, 1 allocs eligible
2016/04/14 12:57:58 [INFO] nomad: shutting down server
2016/04/14 12:57:58 [WARN] serf: Shutdown without a Leave
--- PASS: TestCoreScheduler_EvalGC (0.03s)
=== RUN   TestCoreScheduler_EvalGC_Force
2016/04/14 12:57:58 [INFO] serf: EventMemberJoin: Node 15013.global 127.0.0.1
2016/04/14 12:57:58 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:58 [INFO] raft: Node at 127.0.0.1:15013 [Leader] entering Leader state
2016/04/14 12:57:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:58 [DEBUG] raft: Node 127.0.0.1:15013 updated peer set (2): [127.0.0.1:15013]
2016/04/14 12:57:58 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:58 [INFO] nomad: adding server Node 15013.global (Addr: 127.0.0.1:15013) (DC: dc1)
2016/04/14 12:57:58 [DEBUG] sched.core: forced eval GC
2016/04/14 12:57:58 [DEBUG] sched.core: eval GC: scanning before index 18446744073709551615 (1h0m0s)
2016/04/14 12:57:58 [DEBUG] sched.core: eval GC: 1 evaluations, 1 allocs eligible
2016/04/14 12:57:58 [INFO] nomad: shutting down server
2016/04/14 12:57:58 [WARN] serf: Shutdown without a Leave
--- PASS: TestCoreScheduler_EvalGC_Force (0.03s)
=== RUN   TestCoreScheduler_NodeGC
2016/04/14 12:57:58 [INFO] serf: EventMemberJoin: Node 15015.global 127.0.0.1
2016/04/14 12:57:58 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:57:58 [INFO] raft: Node at 127.0.0.1:15015 [Leader] entering Leader state
2016/04/14 12:57:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:58 [DEBUG] raft: Node 127.0.0.1:15015 updated peer set (2): [127.0.0.1:15015]
2016/04/14 12:57:58 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:58 [INFO] nomad: adding server Node 15015.global (Addr: 127.0.0.1:15015) (DC: dc1)
2016/04/14 12:57:58 [DEBUG] sched.core: node GC: scanning before index 2000 (24h0m0s)
2016/04/14 12:57:58 [DEBUG] sched.core: node GC: 1 nodes eligible
2016/04/14 12:57:58 [INFO] nomad: shutting down server
2016/04/14 12:57:58 [WARN] serf: Shutdown without a Leave
--- PASS: TestCoreScheduler_NodeGC (0.05s)
=== RUN   TestCoreScheduler_NodeGC_Force
2016/04/14 12:57:58 [INFO] serf: EventMemberJoin: Node 15017.global 127.0.0.1
2016/04/14 12:57:58 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:58 [INFO] raft: Node at 127.0.0.1:15017 [Leader] entering Leader state
2016/04/14 12:57:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:58 [DEBUG] raft: Node 127.0.0.1:15017 updated peer set (2): [127.0.0.1:15017]
2016/04/14 12:57:58 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:58 [INFO] nomad: adding server Node 15017.global (Addr: 127.0.0.1:15017) (DC: dc1)
2016/04/14 12:57:58 [DEBUG] sched.core: forced node GC
2016/04/14 12:57:58 [DEBUG] sched.core: node GC: scanning before index 18446744073709551615 (24h0m0s)
2016/04/14 12:57:58 [DEBUG] sched.core: node GC: 1 nodes eligible
2016/04/14 12:57:58 [INFO] nomad: shutting down server
2016/04/14 12:57:58 [WARN] serf: Shutdown without a Leave
--- PASS: TestCoreScheduler_NodeGC_Force (0.02s)
=== RUN   TestCoreScheduler_JobGC
2016/04/14 12:57:58 [INFO] serf: EventMemberJoin: Node 15019.global 127.0.0.1
2016/04/14 12:57:58 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:58 [INFO] raft: Node at 127.0.0.1:15019 [Leader] entering Leader state
2016/04/14 12:57:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:58 [DEBUG] raft: Node 127.0.0.1:15019 updated peer set (2): [127.0.0.1:15019]
2016/04/14 12:57:58 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:58 [INFO] nomad: adding server Node 15019.global (Addr: 127.0.0.1:15019) (DC: dc1)
2016/04/14 12:57:58 [DEBUG] sched.core: job GC: scanning before index 2000 (4h0m0s)
2016/04/14 12:57:58 [DEBUG] sched.core: job GC: 1 jobs, 1 evaluations, 1 allocs eligible
2016/04/14 12:57:58 [INFO] serf: EventMemberJoin: Node 15021.global 127.0.0.1
2016/04/14 12:57:58 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:58 [INFO] raft: Node at 127.0.0.1:15021 [Leader] entering Leader state
2016/04/14 12:57:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:58 [DEBUG] raft: Node 127.0.0.1:15021 updated peer set (2): [127.0.0.1:15021]
2016/04/14 12:57:58 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:58 [INFO] nomad: adding server Node 15021.global (Addr: 127.0.0.1:15021) (DC: dc1)
2016/04/14 12:57:58 [DEBUG] sched.core: job GC: scanning before index 2000 (4h0m0s)
2016/04/14 12:57:58 [INFO] serf: EventMemberJoin: Node 15023.global 127.0.0.1
2016/04/14 12:57:58 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:58 [INFO] raft: Node at 127.0.0.1:15023 [Leader] entering Leader state
2016/04/14 12:57:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:58 [DEBUG] raft: Node 127.0.0.1:15023 updated peer set (2): [127.0.0.1:15023]
2016/04/14 12:57:58 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:58 [INFO] nomad: adding server Node 15023.global (Addr: 127.0.0.1:15023) (DC: dc1)
2016/04/14 12:57:58 [DEBUG] sched.core: job GC: scanning before index 2000 (4h0m0s)
2016/04/14 12:57:58 [INFO] nomad: shutting down server
2016/04/14 12:57:58 [WARN] serf: Shutdown without a Leave
2016/04/14 12:57:58 [INFO] nomad: shutting down server
2016/04/14 12:57:58 [WARN] serf: Shutdown without a Leave
2016/04/14 12:57:58 [INFO] nomad: shutting down server
2016/04/14 12:57:58 [WARN] serf: Shutdown without a Leave
--- PASS: TestCoreScheduler_JobGC (0.07s)
=== RUN   TestCoreScheduler_JobGC_Force
2016/04/14 12:57:58 [INFO] serf: EventMemberJoin: Node 15025.global 127.0.0.1
2016/04/14 12:57:58 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:58 [INFO] raft: Node at 127.0.0.1:15025 [Leader] entering Leader state
2016/04/14 12:57:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:58 [DEBUG] raft: Node 127.0.0.1:15025 updated peer set (2): [127.0.0.1:15025]
2016/04/14 12:57:58 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:58 [INFO] nomad: adding server Node 15025.global (Addr: 127.0.0.1:15025) (DC: dc1)
2016/04/14 12:57:58 [DEBUG] sched.core: forced job GC
2016/04/14 12:57:58 [DEBUG] sched.core: job GC: scanning before index 18446744073709551615 (4h0m0s)
2016/04/14 12:57:58 [DEBUG] sched.core: job GC: 1 jobs, 1 evaluations, 1 allocs eligible
2016/04/14 12:57:58 [INFO] serf: EventMemberJoin: Node 15027.global 127.0.0.1
2016/04/14 12:57:58 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:58 [INFO] raft: Node at 127.0.0.1:15027 [Leader] entering Leader state
2016/04/14 12:57:58 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:58 [DEBUG] raft: Node 127.0.0.1:15027 updated peer set (2): [127.0.0.1:15027]
2016/04/14 12:57:59 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:59 [INFO] nomad: adding server Node 15027.global (Addr: 127.0.0.1:15027) (DC: dc1)
2016/04/14 12:57:59 [DEBUG] sched.core: forced job GC
2016/04/14 12:57:59 [DEBUG] sched.core: job GC: scanning before index 18446744073709551615 (4h0m0s)
2016/04/14 12:57:59 [INFO] serf: EventMemberJoin: Node 15029.global 127.0.0.1
2016/04/14 12:57:59 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:59 [INFO] raft: Node at 127.0.0.1:15029 [Leader] entering Leader state
2016/04/14 12:57:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:59 [DEBUG] raft: Node 127.0.0.1:15029 updated peer set (2): [127.0.0.1:15029]
2016/04/14 12:57:59 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:59 [INFO] nomad: adding server Node 15029.global (Addr: 127.0.0.1:15029) (DC: dc1)
2016/04/14 12:57:59 [DEBUG] sched.core: forced job GC
2016/04/14 12:57:59 [DEBUG] sched.core: job GC: scanning before index 18446744073709551615 (4h0m0s)
2016/04/14 12:57:59 [INFO] nomad: shutting down server
2016/04/14 12:57:59 [WARN] serf: Shutdown without a Leave
2016/04/14 12:57:59 [INFO] nomad: shutting down server
2016/04/14 12:57:59 [WARN] serf: Shutdown without a Leave
2016/04/14 12:57:59 [INFO] nomad: shutting down server
2016/04/14 12:57:59 [WARN] serf: Shutdown without a Leave
2016/04/14 12:57:59 [INFO] nomad: cluster leadership lost
--- PASS: TestCoreScheduler_JobGC_Force (0.13s)
=== RUN   TestEvalBroker_Enqueue_Dequeue_Nack_Ack
--- PASS: TestEvalBroker_Enqueue_Dequeue_Nack_Ack (0.00s)
=== RUN   TestEvalBroker_Serialize_DuplicateJobID
--- PASS: TestEvalBroker_Serialize_DuplicateJobID (0.00s)
=== RUN   TestEvalBroker_Enqueue_Disable
--- PASS: TestEvalBroker_Enqueue_Disable (0.00s)
=== RUN   TestEvalBroker_Dequeue_Timeout
--- PASS: TestEvalBroker_Dequeue_Timeout (0.01s)
=== RUN   TestEvalBroker_Dequeue_Empty_Timeout
--- PASS: TestEvalBroker_Dequeue_Empty_Timeout (0.01s)
=== RUN   TestEvalBroker_Dequeue_Priority
--- PASS: TestEvalBroker_Dequeue_Priority (0.00s)
=== RUN   TestEvalBroker_Dequeue_FIFO
--- PASS: TestEvalBroker_Dequeue_FIFO (0.02s)
=== RUN   TestEvalBroker_Dequeue_Fairness
--- PASS: TestEvalBroker_Dequeue_Fairness (0.02s)
=== RUN   TestEvalBroker_Dequeue_Blocked
--- PASS: TestEvalBroker_Dequeue_Blocked (0.01s)
=== RUN   TestEvalBroker_Nack_Timeout
--- PASS: TestEvalBroker_Nack_Timeout (0.01s)
=== RUN   TestEvalBroker_Nack_TimeoutReset
--- PASS: TestEvalBroker_Nack_TimeoutReset (0.01s)
=== RUN   TestEvalBroker_PauseResumeNackTimeout
--- PASS: TestEvalBroker_PauseResumeNackTimeout (0.01s)
=== RUN   TestEvalBroker_DeliveryLimit
--- PASS: TestEvalBroker_DeliveryLimit (0.00s)
=== RUN   TestEvalBroker_AckAtDeliveryLimit
--- PASS: TestEvalBroker_AckAtDeliveryLimit (0.00s)
=== RUN   TestEvalBroker_Wait
--- PASS: TestEvalBroker_Wait (0.02s)
=== RUN   TestEvalEndpoint_GetEval
2016/04/14 12:57:59 [INFO] raft: Node at 127.0.0.1:15031 [Leader] entering Leader state
2016/04/14 12:57:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:59 [DEBUG] raft: Node 127.0.0.1:15031 updated peer set (2): [127.0.0.1:15031]
2016/04/14 12:57:59 [INFO] serf: EventMemberJoin: Node 15031.global 127.0.0.1
2016/04/14 12:57:59 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:57:59 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:59 [INFO] nomad: adding server Node 15031.global (Addr: 127.0.0.1:15031) (DC: dc1)
2016/04/14 12:57:59 [INFO] nomad: shutting down server
2016/04/14 12:57:59 [WARN] serf: Shutdown without a Leave
--- PASS: TestEvalEndpoint_GetEval (0.03s)
=== RUN   TestEvalEndpoint_GetEval_Blocking
2016/04/14 12:57:59 [INFO] serf: EventMemberJoin: Node 15033.global 127.0.0.1
2016/04/14 12:57:59 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:59 [INFO] raft: Node at 127.0.0.1:15033 [Leader] entering Leader state
2016/04/14 12:57:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:59 [DEBUG] raft: Node 127.0.0.1:15033 updated peer set (2): [127.0.0.1:15033]
2016/04/14 12:57:59 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:59 [INFO] nomad: adding server Node 15033.global (Addr: 127.0.0.1:15033) (DC: dc1)
2016/04/14 12:57:59 [INFO] nomad: shutting down server
2016/04/14 12:57:59 [WARN] serf: Shutdown without a Leave
--- PASS: TestEvalEndpoint_GetEval_Blocking (0.33s)
=== RUN   TestEvalEndpoint_Dequeue
2016/04/14 12:57:59 [INFO] serf: EventMemberJoin: Node 15035.global 127.0.0.1
2016/04/14 12:57:59 [WARN] nomad: no enabled schedulers
2016/04/14 12:57:59 [INFO] raft: Node at 127.0.0.1:15035 [Leader] entering Leader state
2016/04/14 12:57:59 [INFO] nomad: adding server Node 15035.global (Addr: 127.0.0.1:15035) (DC: dc1)
2016/04/14 12:57:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:59 [DEBUG] raft: Node 127.0.0.1:15035 updated peer set (2): [127.0.0.1:15035]
2016/04/14 12:57:59 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:59 [INFO] nomad: shutting down server
2016/04/14 12:57:59 [WARN] serf: Shutdown without a Leave
--- PASS: TestEvalEndpoint_Dequeue (0.03s)
=== RUN   TestEvalEndpoint_Ack
2016/04/14 12:57:59 [INFO] serf: EventMemberJoin: Node 15037.global 127.0.0.1
2016/04/14 12:57:59 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:59 [INFO] raft: Node at 127.0.0.1:15037 [Leader] entering Leader state
2016/04/14 12:57:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:59 [DEBUG] raft: Node 127.0.0.1:15037 updated peer set (2): [127.0.0.1:15037]
2016/04/14 12:57:59 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:59 [INFO] nomad: adding server Node 15037.global (Addr: 127.0.0.1:15037) (DC: dc1)
2016/04/14 12:57:59 [INFO] nomad: shutting down server
2016/04/14 12:57:59 [WARN] serf: Shutdown without a Leave
--- PASS: TestEvalEndpoint_Ack (0.03s)
=== RUN   TestEvalEndpoint_Nack
2016/04/14 12:57:59 [INFO] serf: EventMemberJoin: Node 15039.global 127.0.0.1
2016/04/14 12:57:59 [WARN] nomad: no enabled schedulers
2016/04/14 12:57:59 [INFO] raft: Node at 127.0.0.1:15039 [Leader] entering Leader state
2016/04/14 12:57:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:59 [DEBUG] raft: Node 127.0.0.1:15039 updated peer set (2): [127.0.0.1:15039]
2016/04/14 12:57:59 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:59 [INFO] nomad: adding server Node 15039.global (Addr: 127.0.0.1:15039) (DC: dc1)
2016/04/14 12:57:59 [INFO] nomad: shutting down server
2016/04/14 12:57:59 [WARN] serf: Shutdown without a Leave
--- PASS: TestEvalEndpoint_Nack (0.02s)
=== RUN   TestEvalEndpoint_Update
2016/04/14 12:57:59 [INFO] serf: EventMemberJoin: Node 15041.global 127.0.0.1
2016/04/14 12:57:59 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:59 [INFO] raft: Node at 127.0.0.1:15041 [Leader] entering Leader state
2016/04/14 12:57:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:59 [DEBUG] raft: Node 127.0.0.1:15041 updated peer set (2): [127.0.0.1:15041]
2016/04/14 12:57:59 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:59 [INFO] nomad: adding server Node 15041.global (Addr: 127.0.0.1:15041) (DC: dc1)
2016/04/14 12:57:59 [INFO] nomad: shutting down server
2016/04/14 12:57:59 [WARN] serf: Shutdown without a Leave
--- PASS: TestEvalEndpoint_Update (0.03s)
=== RUN   TestEvalEndpoint_Create
2016/04/14 12:57:59 [INFO] serf: EventMemberJoin: Node 15043.global 127.0.0.1
2016/04/14 12:57:59 [WARN] nomad: no enabled schedulers
2016/04/14 12:57:59 [INFO] raft: Node at 127.0.0.1:15043 [Leader] entering Leader state
2016/04/14 12:57:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:59 [DEBUG] raft: Node 127.0.0.1:15043 updated peer set (2): [127.0.0.1:15043]
2016/04/14 12:57:59 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:59 [INFO] nomad: adding server Node 15043.global (Addr: 127.0.0.1:15043) (DC: dc1)
2016/04/14 12:57:59 [INFO] nomad: shutting down server
2016/04/14 12:57:59 [WARN] serf: Shutdown without a Leave
--- PASS: TestEvalEndpoint_Create (0.09s)
=== RUN   TestEvalEndpoint_Reap
2016/04/14 12:57:59 [INFO] serf: EventMemberJoin: Node 15045.global 127.0.0.1
2016/04/14 12:57:59 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:59 [INFO] raft: Node at 127.0.0.1:15045 [Leader] entering Leader state
2016/04/14 12:57:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:59 [DEBUG] raft: Node 127.0.0.1:15045 updated peer set (2): [127.0.0.1:15045]
2016/04/14 12:57:59 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:59 [INFO] nomad: adding server Node 15045.global (Addr: 127.0.0.1:15045) (DC: dc1)
2016/04/14 12:57:59 [INFO] nomad: shutting down server
2016/04/14 12:57:59 [WARN] serf: Shutdown without a Leave
--- PASS: TestEvalEndpoint_Reap (0.02s)
=== RUN   TestEvalEndpoint_List
2016/04/14 12:57:59 [INFO] serf: EventMemberJoin: Node 15047.global 127.0.0.1
2016/04/14 12:57:59 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:59 [INFO] raft: Node at 127.0.0.1:15047 [Leader] entering Leader state
2016/04/14 12:57:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:59 [DEBUG] raft: Node 127.0.0.1:15047 updated peer set (2): [127.0.0.1:15047]
2016/04/14 12:57:59 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:59 [INFO] nomad: adding server Node 15047.global (Addr: 127.0.0.1:15047) (DC: dc1)
2016/04/14 12:57:59 [INFO] nomad: shutting down server
2016/04/14 12:57:59 [WARN] serf: Shutdown without a Leave
--- PASS: TestEvalEndpoint_List (0.03s)
=== RUN   TestEvalEndpoint_List_Blocking
2016/04/14 12:57:59 [INFO] serf: EventMemberJoin: Node 15049.global 127.0.0.1
2016/04/14 12:57:59 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:57:59 [INFO] raft: Node at 127.0.0.1:15049 [Leader] entering Leader state
2016/04/14 12:57:59 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:57:59 [DEBUG] raft: Node 127.0.0.1:15049 updated peer set (2): [127.0.0.1:15049]
2016/04/14 12:57:59 [INFO] nomad: cluster leadership acquired
2016/04/14 12:57:59 [INFO] nomad: adding server Node 15049.global (Addr: 127.0.0.1:15049) (DC: dc1)
2016/04/14 12:58:00 [INFO] nomad: shutting down server
2016/04/14 12:58:00 [WARN] serf: Shutdown without a Leave
--- PASS: TestEvalEndpoint_List_Blocking (0.23s)
=== RUN   TestEvalEndpoint_Allocations
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15051.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15051 [Leader] entering Leader state
2016/04/14 12:58:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:00 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15051.global (Addr: 127.0.0.1:15051) (DC: dc1)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15051 updated peer set (2): [127.0.0.1:15051]
2016/04/14 12:58:00 [INFO] nomad: shutting down server
2016/04/14 12:58:00 [WARN] serf: Shutdown without a Leave
--- PASS: TestEvalEndpoint_Allocations (0.03s)
=== RUN   TestEvalEndpoint_Allocations_Blocking
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15053.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: starting 4 scheduling worker(s) for [system noop service batch _core]
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15053 [Leader] entering Leader state
2016/04/14 12:58:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15053 updated peer set (2): [127.0.0.1:15053]
2016/04/14 12:58:00 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15053.global (Addr: 127.0.0.1:15053) (DC: dc1)
2016/04/14 12:58:00 [INFO] nomad: shutting down server
2016/04/14 12:58:00 [WARN] serf: Shutdown without a Leave
--- PASS: TestEvalEndpoint_Allocations_Blocking (0.23s)
=== RUN   TestFSM_UpsertNode
--- PASS: TestFSM_UpsertNode (0.00s)
=== RUN   TestFSM_DeregisterNode
--- PASS: TestFSM_DeregisterNode (0.00s)
=== RUN   TestFSM_UpdateNodeStatus
--- PASS: TestFSM_UpdateNodeStatus (0.01s)
=== RUN   TestFSM_UpdateNodeDrain
--- PASS: TestFSM_UpdateNodeDrain (0.00s)
=== RUN   TestFSM_RegisterJob
2016/04/14 12:58:00 [DEBUG] nomad.periodic: registered periodic job "6686effb-f5a4-7730-5f09-0a4c5f050467"
--- PASS: TestFSM_RegisterJob (0.01s)
=== RUN   TestFSM_DeregisterJob
2016/04/14 12:58:00 [DEBUG] nomad.periodic: launching job "6686effb-f5a4-7730-5f09-0a4c5f050467" in 1m59.70224943s
2016/04/14 12:58:00 [DEBUG] nomad.periodic: launching job "6686effb-f5a4-7730-5f09-0a4c5f050467" in 1m59.701846763s
2016/04/14 12:58:00 [DEBUG] nomad.periodic: registered periodic job "ff818fda-9414-0e00-6eee-c74e73c2b3aa"
2016/04/14 12:58:00 [DEBUG] nomad.periodic: deregistered periodic job "ff818fda-9414-0e00-6eee-c74e73c2b3aa"
--- PASS: TestFSM_DeregisterJob (0.01s)
=== RUN   TestFSM_UpdateEval
--- PASS: TestFSM_UpdateEval (0.00s)
=== RUN   TestFSM_UpdateEval_Blocked
--- PASS: TestFSM_UpdateEval_Blocked (0.00s)
=== RUN   TestFSM_DeleteEval
--- PASS: TestFSM_DeleteEval (0.00s)
=== RUN   TestFSM_UpsertAllocs
--- PASS: TestFSM_UpsertAllocs (0.01s)
=== RUN   TestFSM_UpsertAllocs_SharedJob
--- PASS: TestFSM_UpsertAllocs_SharedJob (0.01s)
=== RUN   TestFSM_UpsertAllocs_StrippedResources
--- PASS: TestFSM_UpsertAllocs_StrippedResources (0.00s)
=== RUN   TestFSM_UpdateAllocFromClient_Unblock
--- PASS: TestFSM_UpdateAllocFromClient_Unblock (0.02s)
=== RUN   TestFSM_UpdateAllocFromClient
2016/04/14 12:58:00 [ERR] nomad.fsm: looking up node "12345678-abcd-efab-cdef-123456789abc" failed: <nil>
--- PASS: TestFSM_UpdateAllocFromClient (0.04s)
=== RUN   TestFSM_SnapshotRestore_Nodes
--- PASS: TestFSM_SnapshotRestore_Nodes (0.05s)
=== RUN   TestFSM_SnapshotRestore_Jobs
--- PASS: TestFSM_SnapshotRestore_Jobs (0.03s)
=== RUN   TestFSM_SnapshotRestore_Evals
--- PASS: TestFSM_SnapshotRestore_Evals (0.01s)
=== RUN   TestFSM_SnapshotRestore_Allocs
--- PASS: TestFSM_SnapshotRestore_Allocs (0.03s)
=== RUN   TestFSM_SnapshotRestore_Indexes
--- PASS: TestFSM_SnapshotRestore_Indexes (0.00s)
=== RUN   TestFSM_SnapshotRestore_TimeTable
--- PASS: TestFSM_SnapshotRestore_TimeTable (0.00s)
=== RUN   TestFSM_SnapshotRestore_PeriodicLaunches
--- PASS: TestFSM_SnapshotRestore_PeriodicLaunches (0.01s)
=== RUN   TestInitializeHeartbeatTimers
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15055.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15055 [Leader] entering Leader state
2016/04/14 12:58:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15055 updated peer set (2): [127.0.0.1:15055]
2016/04/14 12:58:00 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15055.global (Addr: 127.0.0.1:15055) (DC: dc1)
2016/04/14 12:58:00 [INFO] nomad: shutting down server
2016/04/14 12:58:00 [WARN] serf: Shutdown without a Leave
--- PASS: TestInitializeHeartbeatTimers (0.02s)
=== RUN   TestResetHeartbeatTimer
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15057.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15057 [Leader] entering Leader state
2016/04/14 12:58:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15057 updated peer set (2): [127.0.0.1:15057]
2016/04/14 12:58:00 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15057.global (Addr: 127.0.0.1:15057) (DC: dc1)
2016/04/14 12:58:00 [INFO] nomad: shutting down server
2016/04/14 12:58:00 [WARN] serf: Shutdown without a Leave
--- PASS: TestResetHeartbeatTimer (0.02s)
=== RUN   TestResetHeartbeatTimerLocked
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15059.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15059 [Leader] entering Leader state
2016/04/14 12:58:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15059 updated peer set (2): [127.0.0.1:15059]
2016/04/14 12:58:00 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15059.global (Addr: 127.0.0.1:15059) (DC: dc1)
2016/04/14 12:58:00 [WARN] nomad.heartbeat: node 'foo' TTL expired
2016/04/14 12:58:00 [ERR] nomad.heartbeat: update status failed: node lookup failed: index error: UUID must be 36 characters
2016/04/14 12:58:00 [INFO] nomad: shutting down server
2016/04/14 12:58:00 [WARN] serf: Shutdown without a Leave
--- PASS: TestResetHeartbeatTimerLocked (0.03s)
=== RUN   TestResetHeartbeatTimerLocked_Renew
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15061.global 127.0.0.1
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15061 [Leader] entering Leader state
2016/04/14 12:58:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15061 updated peer set (2): [127.0.0.1:15061]
2016/04/14 12:58:00 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:00 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15061.global (Addr: 127.0.0.1:15061) (DC: dc1)
2016/04/14 12:58:00 [WARN] nomad.heartbeat: node 'foo' TTL expired
2016/04/14 12:58:00 [ERR] nomad.heartbeat: update status failed: node lookup failed: index error: UUID must be 36 characters
2016/04/14 12:58:00 [INFO] nomad: shutting down server
2016/04/14 12:58:00 [WARN] serf: Shutdown without a Leave
--- PASS: TestResetHeartbeatTimerLocked_Renew (0.03s)
=== RUN   TestInvalidateHeartbeat
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15063.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15063 [Leader] entering Leader state
2016/04/14 12:58:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15063 updated peer set (2): [127.0.0.1:15063]
2016/04/14 12:58:00 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15063.global (Addr: 127.0.0.1:15063) (DC: dc1)
2016/04/14 12:58:00 [WARN] nomad.heartbeat: node '4993f69b-a738-972d-45c3-671a1a553b1d' TTL expired
2016/04/14 12:58:00 [INFO] nomad: shutting down server
2016/04/14 12:58:00 [WARN] serf: Shutdown without a Leave
--- PASS: TestInvalidateHeartbeat (0.02s)
=== RUN   TestClearHeartbeatTimer
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15065.global 127.0.0.1
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15065 [Leader] entering Leader state
2016/04/14 12:58:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15065 updated peer set (2): [127.0.0.1:15065]
2016/04/14 12:58:00 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:00 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15065.global (Addr: 127.0.0.1:15065) (DC: dc1)
2016/04/14 12:58:00 [INFO] nomad: shutting down server
2016/04/14 12:58:00 [WARN] serf: Shutdown without a Leave
--- PASS: TestClearHeartbeatTimer (0.02s)
=== RUN   TestClearAllHeartbeatTimers
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15067.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15067 [Leader] entering Leader state
2016/04/14 12:58:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15067 updated peer set (2): [127.0.0.1:15067]
2016/04/14 12:58:00 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15067.global (Addr: 127.0.0.1:15067) (DC: dc1)
2016/04/14 12:58:00 [INFO] nomad: shutting down server
2016/04/14 12:58:00 [WARN] serf: Shutdown without a Leave
--- PASS: TestClearAllHeartbeatTimers (0.02s)
=== RUN   TestServer_HeartbeatTTL_Failover
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15069.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15071.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15073.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:58:00 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15070
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15069 [Leader] entering Leader state
2016/04/14 12:58:00 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15069 updated peer set (2): [127.0.0.1:15069]
2016/04/14 12:58:00 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:00 [DEBUG] memberlist: TCP connection from=127.0.0.1:48065
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15071.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15069.global (Addr: 127.0.0.1:15069) (DC: dc1)
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15071.global (Addr: 127.0.0.1:15071) (DC: dc1)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15069 updated peer set (2): [127.0.0.1:15071 127.0.0.1:15069]
2016/04/14 12:58:00 [INFO] raft: Added peer 127.0.0.1:15071, starting replication
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15071 [Follower] entering Follower state
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15071.global (Addr: 127.0.0.1:15071) (DC: dc1)
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15073 [Follower] entering Follower state
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15073.global (Addr: 127.0.0.1:15073) (DC: dc1)
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15069.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15069.global (Addr: 127.0.0.1:15069) (DC: dc1)
2016/04/14 12:58:00 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15070
2016/04/14 12:58:00 [DEBUG] memberlist: TCP connection from=127.0.0.1:48067
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15073.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15073.global (Addr: 127.0.0.1:15073) (DC: dc1)
2016/04/14 12:58:00 [DEBUG] raft-net: 127.0.0.1:15071 accepted connection from: 127.0.0.1:33844
2016/04/14 12:58:00 [WARN] raft: Failed to get previous log: 2 log not found (last: 0)
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15071.global 127.0.0.1
2016/04/14 12:58:00 [INFO] serf: EventMemberJoin: Node 15069.global 127.0.0.1
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15071.global (Addr: 127.0.0.1:15071) (DC: dc1)
2016/04/14 12:58:00 [INFO] nomad: adding server Node 15069.global (Addr: 127.0.0.1:15069) (DC: dc1)
2016/04/14 12:58:00 [WARN] raft: AppendEntries to 127.0.0.1:15071 rejected, sending older logs (next: 1)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15071 updated peer set (2): [127.0.0.1:15069]
2016/04/14 12:58:00 [INFO] raft: pipelining replication to peer 127.0.0.1:15071
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15069 updated peer set (2): [127.0.0.1:15071 127.0.0.1:15069]
2016/04/14 12:58:00 [INFO] nomad: added raft peer: Node 15071.global (Addr: 127.0.0.1:15071) (DC: dc1)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15069 updated peer set (2): [127.0.0.1:15073 127.0.0.1:15069 127.0.0.1:15071]
2016/04/14 12:58:00 [INFO] raft: Added peer 127.0.0.1:15073, starting replication
2016/04/14 12:58:00 [DEBUG] raft-net: 127.0.0.1:15073 accepted connection from: 127.0.0.1:60542
2016/04/14 12:58:00 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15071 updated peer set (2): [127.0.0.1:15071 127.0.0.1:15069]
2016/04/14 12:58:00 [DEBUG] raft-net: 127.0.0.1:15071 accepted connection from: 127.0.0.1:33847
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15069 updated peer set (2): [127.0.0.1:15073 127.0.0.1:15069 127.0.0.1:15071]
2016/04/14 12:58:00 [INFO] nomad: added raft peer: Node 15073.global (Addr: 127.0.0.1:15073) (DC: dc1)
2016/04/14 12:58:00 [WARN] raft: AppendEntries to 127.0.0.1:15073 rejected, sending older logs (next: 1)
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15073 updated peer set (2): [127.0.0.1:15069]
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15073 updated peer set (2): [127.0.0.1:15071 127.0.0.1:15069]
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15073 updated peer set (2): [127.0.0.1:15073 127.0.0.1:15069 127.0.0.1:15071]
2016/04/14 12:58:00 [INFO] raft: pipelining replication to peer 127.0.0.1:15073
2016/04/14 12:58:00 [DEBUG] raft-net: 127.0.0.1:15073 accepted connection from: 127.0.0.1:60545
2016/04/14 12:58:00 [DEBUG] raft: Node 127.0.0.1:15071 updated peer set (2): [127.0.0.1:15073 127.0.0.1:15069 127.0.0.1:15071]
2016/04/14 12:58:00 [INFO] nomad: shutting down server
2016/04/14 12:58:00 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:00 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15073
2016/04/14 12:58:00 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15071
2016/04/14 12:58:00 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:00 [INFO] raft: Node at 127.0.0.1:15071 [Candidate] entering Candidate state
2016/04/14 12:58:00 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:00 [DEBUG] raft: Vote granted from 127.0.0.1:15071. Tally: 1
2016/04/14 12:58:00 [DEBUG] raft-net: 127.0.0.1:15073 accepted connection from: 127.0.0.1:60547
2016/04/14 12:58:00 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15069: dial tcp 127.0.0.1:15069: getsockopt: connection refused
2016/04/14 12:58:00 [WARN] raft: Rejecting vote request from 127.0.0.1:15071 since we have a leader: 127.0.0.1:15069
2016/04/14 12:58:00 [DEBUG] memberlist: Failed UDP ping: Node 15069.global (timeout reached)
2016/04/14 12:58:01 [DEBUG] memberlist: Failed UDP ping: Node 15069.global (timeout reached)
2016/04/14 12:58:01 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:15073 [Candidate] entering Candidate state
2016/04/14 12:58:01 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:01 [DEBUG] raft: Vote granted from 127.0.0.1:15073. Tally: 1
2016/04/14 12:58:01 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15069: dial tcp 127.0.0.1:15069: getsockopt: connection refused
2016/04/14 12:58:01 [DEBUG] raft-net: 127.0.0.1:15071 accepted connection from: 127.0.0.1:33854
2016/04/14 12:58:01 [INFO] raft: Duplicate RequestVote for same term: 1
2016/04/14 12:58:01 [INFO] memberlist: Suspect Node 15069.global has failed, no acks received
2016/04/14 12:58:01 [INFO] memberlist: Suspect Node 15069.global has failed, no acks received
2016/04/14 12:58:01 [WARN] raft: Election timeout reached, restarting election
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:15071 [Candidate] entering Candidate state
2016/04/14 12:58:01 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:01 [DEBUG] raft: Vote granted from 127.0.0.1:15071. Tally: 1
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:15073 [Follower] entering Follower state
2016/04/14 12:58:01 [DEBUG] raft: Vote granted from 127.0.0.1:15073. Tally: 2
2016/04/14 12:58:01 [INFO] raft: Election won. Tally: 2
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:15071 [Leader] entering Leader state
2016/04/14 12:58:01 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:01 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15069: dial tcp 127.0.0.1:15069: getsockopt: connection refused
2016/04/14 12:58:01 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15069: dial tcp 127.0.0.1:15069: getsockopt: connection refused
2016/04/14 12:58:01 [INFO] raft: pipelining replication to peer 127.0.0.1:15073
2016/04/14 12:58:01 [DEBUG] raft: Node 127.0.0.1:15071 updated peer set (2): [127.0.0.1:15071 127.0.0.1:15073 127.0.0.1:15069]
2016/04/14 12:58:01 [INFO] nomad: shutting down server
2016/04/14 12:58:01 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:01 [INFO] nomad: shutting down server
2016/04/14 12:58:01 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:01 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15073
2016/04/14 12:58:01 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15069: dial tcp 127.0.0.1:15069: getsockopt: connection refused
2016/04/14 12:58:01 [ERR] raft: Failed to heartbeat to 127.0.0.1:15069: dial tcp 127.0.0.1:15069: getsockopt: connection refused
2016/04/14 12:58:01 [INFO] memberlist: Marking Node 15069.global as failed, suspect timeout reached
2016/04/14 12:58:01 [INFO] serf: EventMemberFailed: Node 15069.global 127.0.0.1
2016/04/14 12:58:01 [INFO] memberlist: Marking Node 15069.global as failed, suspect timeout reached
2016/04/14 12:58:01 [INFO] serf: EventMemberFailed: Node 15069.global 127.0.0.1
2016/04/14 12:58:01 [ERR] raft: Failed to heartbeat to 127.0.0.1:15073: read tcp 127.0.0.1:60555->127.0.0.1:15073: i/o timeout
2016/04/14 12:58:01 [INFO] nomad: shutting down server
--- PASS: TestServer_HeartbeatTTL_Failover (0.88s)
=== RUN   TestJobEndpoint_Register
2016/04/14 12:58:01 [INFO] serf: EventMemberJoin: Node 15075.global 127.0.0.1
2016/04/14 12:58:01 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:15075 [Leader] entering Leader state
2016/04/14 12:58:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:01 [DEBUG] raft: Node 127.0.0.1:15075 updated peer set (2): [127.0.0.1:15075]
2016/04/14 12:58:01 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:01 [INFO] nomad: adding server Node 15075.global (Addr: 127.0.0.1:15075) (DC: dc1)
2016/04/14 12:58:01 [INFO] nomad: shutting down server
2016/04/14 12:58:01 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_Register (0.03s)
=== RUN   TestJobEndpoint_Register_Existing
2016/04/14 12:58:01 [INFO] serf: EventMemberJoin: Node 15077.global 127.0.0.1
2016/04/14 12:58:01 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:15077 [Leader] entering Leader state
2016/04/14 12:58:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:01 [DEBUG] raft: Node 127.0.0.1:15077 updated peer set (2): [127.0.0.1:15077]
2016/04/14 12:58:01 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:01 [INFO] nomad: adding server Node 15077.global (Addr: 127.0.0.1:15077) (DC: dc1)
2016/04/14 12:58:01 [INFO] nomad: shutting down server
2016/04/14 12:58:01 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_Register_Existing (0.04s)
=== RUN   TestJobEndpoint_Register_Batch
2016/04/14 12:58:01 [INFO] serf: EventMemberJoin: Node 15079.global 127.0.0.1
2016/04/14 12:58:01 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:15079 [Leader] entering Leader state
2016/04/14 12:58:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:01 [DEBUG] raft: Node 127.0.0.1:15079 updated peer set (2): [127.0.0.1:15079]
2016/04/14 12:58:01 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:01 [INFO] nomad: adding server Node 15079.global (Addr: 127.0.0.1:15079) (DC: dc1)
2016/04/14 12:58:01 [INFO] nomad: shutting down server
2016/04/14 12:58:01 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_Register_Batch (0.04s)
=== RUN   TestJobEndpoint_Register_GC_Set
2016/04/14 12:58:01 [INFO] serf: EventMemberJoin: Node 15081.global 127.0.0.1
2016/04/14 12:58:01 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:15081 [Leader] entering Leader state
2016/04/14 12:58:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:01 [DEBUG] raft: Node 127.0.0.1:15081 updated peer set (2): [127.0.0.1:15081]
2016/04/14 12:58:01 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:01 [INFO] nomad: adding server Node 15081.global (Addr: 127.0.0.1:15081) (DC: dc1)
2016/04/14 12:58:01 [INFO] nomad: shutting down server
2016/04/14 12:58:01 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_Register_GC_Set (0.03s)
=== RUN   TestJobEndpoint_Register_Periodic
2016/04/14 12:58:01 [INFO] serf: EventMemberJoin: Node 15083.global 127.0.0.1
2016/04/14 12:58:01 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:15083 [Leader] entering Leader state
2016/04/14 12:58:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:01 [DEBUG] raft: Node 127.0.0.1:15083 updated peer set (2): [127.0.0.1:15083]
2016/04/14 12:58:01 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:01 [INFO] nomad: adding server Node 15083.global (Addr: 127.0.0.1:15083) (DC: dc1)
2016/04/14 12:58:01 [DEBUG] nomad.periodic: registered periodic job "56966329-6a08-4f4f-2f6d-4f1e5440fe0c"
2016/04/14 12:58:01 [DEBUG] nomad.periodic: launching job "56966329-6a08-4f4f-2f6d-4f1e5440fe0c" in 1m58.227368097s
2016/04/14 12:58:01 [INFO] nomad: shutting down server
2016/04/14 12:58:01 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_Register_Periodic (0.03s)
=== RUN   TestJobEndpoint_Evaluate
2016/04/14 12:58:01 [INFO] serf: EventMemberJoin: Node 15085.global 127.0.0.1
2016/04/14 12:58:01 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:15085 [Leader] entering Leader state
2016/04/14 12:58:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:01 [DEBUG] raft: Node 127.0.0.1:15085 updated peer set (2): [127.0.0.1:15085]
2016/04/14 12:58:01 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:01 [INFO] nomad: adding server Node 15085.global (Addr: 127.0.0.1:15085) (DC: dc1)
2016/04/14 12:58:01 [INFO] nomad: shutting down server
2016/04/14 12:58:01 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_Evaluate (0.04s)
=== RUN   TestJobEndpoint_Evaluate_Periodic
2016/04/14 12:58:01 [INFO] serf: EventMemberJoin: Node 15087.global 127.0.0.1
2016/04/14 12:58:01 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:15087 [Leader] entering Leader state
2016/04/14 12:58:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:01 [DEBUG] raft: Node 127.0.0.1:15087 updated peer set (2): [127.0.0.1:15087]
2016/04/14 12:58:01 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:01 [INFO] nomad: adding server Node 15087.global (Addr: 127.0.0.1:15087) (DC: dc1)
2016/04/14 12:58:01 [DEBUG] nomad.periodic: registered periodic job "473b7a12-4501-7e31-6df9-e9bca8c76c80"
2016/04/14 12:58:01 [DEBUG] nomad.periodic: launching job "473b7a12-4501-7e31-6df9-e9bca8c76c80" in 1m58.157450097s
2016/04/14 12:58:01 [INFO] nomad: shutting down server
2016/04/14 12:58:01 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_Evaluate_Periodic (0.03s)
=== RUN   TestJobEndpoint_Deregister
2016/04/14 12:58:01 [INFO] serf: EventMemberJoin: Node 15089.global 127.0.0.1
2016/04/14 12:58:01 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:01 [INFO] raft: Node at 127.0.0.1:15089 [Leader] entering Leader state
2016/04/14 12:58:01 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:01 [DEBUG] raft: Node 127.0.0.1:15089 updated peer set (2): [127.0.0.1:15089]
2016/04/14 12:58:01 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:01 [INFO] nomad: adding server Node 15089.global (Addr: 127.0.0.1:15089) (DC: dc1)
2016/04/14 12:58:02 [INFO] nomad: shutting down server
2016/04/14 12:58:02 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_Deregister (0.18s)
=== RUN   TestJobEndpoint_Deregister_Periodic
2016/04/14 12:58:02 [INFO] serf: EventMemberJoin: Node 15091.global 127.0.0.1
2016/04/14 12:58:02 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:02 [INFO] raft: Node at 127.0.0.1:15091 [Leader] entering Leader state
2016/04/14 12:58:02 [INFO] nomad: adding server Node 15091.global (Addr: 127.0.0.1:15091) (DC: dc1)
2016/04/14 12:58:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:02 [DEBUG] raft: Node 127.0.0.1:15091 updated peer set (2): [127.0.0.1:15091]
2016/04/14 12:58:02 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:02 [DEBUG] nomad.periodic: registered periodic job "9bfe2dea-2598-4651-c8be-621209f4a801"
2016/04/14 12:58:02 [DEBUG] nomad.periodic: launching job "9bfe2dea-2598-4651-c8be-621209f4a801" in 1m57.941454097s
2016/04/14 12:58:02 [DEBUG] nomad.periodic: deregistered periodic job "9bfe2dea-2598-4651-c8be-621209f4a801"
2016/04/14 12:58:02 [INFO] nomad: shutting down server
2016/04/14 12:58:02 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_Deregister_Periodic (0.04s)
=== RUN   TestJobEndpoint_GetJob
2016/04/14 12:58:02 [INFO] serf: EventMemberJoin: Node 15093.global 127.0.0.1
2016/04/14 12:58:02 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:02 [INFO] raft: Node at 127.0.0.1:15093 [Leader] entering Leader state
2016/04/14 12:58:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:02 [DEBUG] raft: Node 127.0.0.1:15093 updated peer set (2): [127.0.0.1:15093]
2016/04/14 12:58:02 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:02 [INFO] nomad: adding server Node 15093.global (Addr: 127.0.0.1:15093) (DC: dc1)
2016/04/14 12:58:02 [INFO] nomad: shutting down server
2016/04/14 12:58:02 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_GetJob (0.04s)
=== RUN   TestJobEndpoint_GetJob_Blocking
2016/04/14 12:58:02 [INFO] serf: EventMemberJoin: Node 15095.global 127.0.0.1
2016/04/14 12:58:02 [INFO] nomad: starting 4 scheduling worker(s) for [batch system noop service _core]
2016/04/14 12:58:02 [INFO] raft: Node at 127.0.0.1:15095 [Leader] entering Leader state
2016/04/14 12:58:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:02 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:02 [INFO] nomad: adding server Node 15095.global (Addr: 127.0.0.1:15095) (DC: dc1)
2016/04/14 12:58:02 [DEBUG] raft: Node 127.0.0.1:15095 updated peer set (2): [127.0.0.1:15095]
2016/04/14 12:58:02 [INFO] nomad: shutting down server
2016/04/14 12:58:02 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_GetJob_Blocking (0.33s)
=== RUN   TestJobEndpoint_ListJobs
2016/04/14 12:58:02 [INFO] serf: EventMemberJoin: Node 15097.global 127.0.0.1
2016/04/14 12:58:02 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:02 [INFO] raft: Node at 127.0.0.1:15097 [Leader] entering Leader state
2016/04/14 12:58:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:02 [DEBUG] raft: Node 127.0.0.1:15097 updated peer set (2): [127.0.0.1:15097]
2016/04/14 12:58:02 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:02 [INFO] nomad: adding server Node 15097.global (Addr: 127.0.0.1:15097) (DC: dc1)
2016/04/14 12:58:02 [INFO] nomad: shutting down server
2016/04/14 12:58:02 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_ListJobs (0.03s)
=== RUN   TestJobEndpoint_ListJobs_Blocking
2016/04/14 12:58:02 [INFO] serf: EventMemberJoin: Node 15099.global 127.0.0.1
2016/04/14 12:58:02 [INFO] nomad: starting 4 scheduling worker(s) for [batch system noop service _core]
2016/04/14 12:58:02 [INFO] raft: Node at 127.0.0.1:15099 [Leader] entering Leader state
2016/04/14 12:58:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:02 [DEBUG] raft: Node 127.0.0.1:15099 updated peer set (2): [127.0.0.1:15099]
2016/04/14 12:58:02 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:02 [INFO] nomad: adding server Node 15099.global (Addr: 127.0.0.1:15099) (DC: dc1)
2016/04/14 12:58:02 [INFO] nomad: shutting down server
2016/04/14 12:58:02 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_ListJobs_Blocking (0.23s)
=== RUN   TestJobEndpoint_Allocations
2016/04/14 12:58:02 [INFO] serf: EventMemberJoin: Node 15101.global 127.0.0.1
2016/04/14 12:58:02 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:02 [INFO] raft: Node at 127.0.0.1:15101 [Leader] entering Leader state
2016/04/14 12:58:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:02 [DEBUG] raft: Node 127.0.0.1:15101 updated peer set (2): [127.0.0.1:15101]
2016/04/14 12:58:02 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:02 [INFO] nomad: adding server Node 15101.global (Addr: 127.0.0.1:15101) (DC: dc1)
2016/04/14 12:58:02 [INFO] nomad: shutting down server
2016/04/14 12:58:02 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_Allocations (0.03s)
=== RUN   TestJobEndpoint_Allocations_Blocking
2016/04/14 12:58:02 [INFO] serf: EventMemberJoin: Node 15103.global 127.0.0.1
2016/04/14 12:58:02 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:02 [INFO] raft: Node at 127.0.0.1:15103 [Leader] entering Leader state
2016/04/14 12:58:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:02 [DEBUG] raft: Node 127.0.0.1:15103 updated peer set (2): [127.0.0.1:15103]
2016/04/14 12:58:02 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:02 [INFO] nomad: adding server Node 15103.global (Addr: 127.0.0.1:15103) (DC: dc1)
2016/04/14 12:58:02 [INFO] nomad: shutting down server
2016/04/14 12:58:02 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_Allocations_Blocking (0.23s)
=== RUN   TestJobEndpoint_Evaluations
2016/04/14 12:58:02 [INFO] serf: EventMemberJoin: Node 15105.global 127.0.0.1
2016/04/14 12:58:02 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:02 [INFO] raft: Node at 127.0.0.1:15105 [Leader] entering Leader state
2016/04/14 12:58:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:02 [DEBUG] raft: Node 127.0.0.1:15105 updated peer set (2): [127.0.0.1:15105]
2016/04/14 12:58:02 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:02 [INFO] nomad: adding server Node 15105.global (Addr: 127.0.0.1:15105) (DC: dc1)
2016/04/14 12:58:02 [INFO] nomad: shutting down server
2016/04/14 12:58:02 [WARN] serf: Shutdown without a Leave
--- PASS: TestJobEndpoint_Evaluations (0.03s)
=== RUN   TestLeader_LeftServer
2016/04/14 12:58:02 [INFO] serf: EventMemberJoin: Node 15107.global 127.0.0.1
2016/04/14 12:58:02 [INFO] raft: Node at 127.0.0.1:15107 [Leader] entering Leader state
2016/04/14 12:58:02 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:02 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:02 [INFO] serf: EventMemberJoin: Node 15109.global 127.0.0.1
2016/04/14 12:58:02 [DEBUG] raft: Node 127.0.0.1:15107 updated peer set (2): [127.0.0.1:15107]
2016/04/14 12:58:02 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:02 [INFO] nomad: adding server Node 15107.global (Addr: 127.0.0.1:15107) (DC: dc1)
2016/04/14 12:58:02 [INFO] raft: Node at 127.0.0.1:15109 [Follower] entering Follower state
2016/04/14 12:58:03 [INFO] nomad: starting 4 scheduling worker(s) for [batch system noop service _core]
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15111.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15109.global (Addr: 127.0.0.1:15109) (DC: dc1)
2016/04/14 12:58:03 [INFO] raft: Node at 127.0.0.1:15111 [Follower] entering Follower state
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15111.global (Addr: 127.0.0.1:15111) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] memberlist: TCP connection from=127.0.0.1:46232
2016/04/14 12:58:03 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15108
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15109.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15109.global (Addr: 127.0.0.1:15109) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15107 updated peer set (2): [127.0.0.1:15109 127.0.0.1:15107]
2016/04/14 12:58:03 [INFO] raft: Added peer 127.0.0.1:15109, starting replication
2016/04/14 12:58:03 [DEBUG] raft-net: 127.0.0.1:15109 accepted connection from: 127.0.0.1:37173
2016/04/14 12:58:03 [WARN] raft: Failed to get previous log: 2 log not found (last: 0)
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15107.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15107.global (Addr: 127.0.0.1:15107) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15108
2016/04/14 12:58:03 [DEBUG] memberlist: TCP connection from=127.0.0.1:46234
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15111.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15111.global (Addr: 127.0.0.1:15111) (DC: dc1)
2016/04/14 12:58:03 [WARN] raft: AppendEntries to 127.0.0.1:15109 rejected, sending older logs (next: 1)
2016/04/14 12:58:03 [DEBUG] raft-net: 127.0.0.1:15109 accepted connection from: 127.0.0.1:37175
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15109 updated peer set (2): [127.0.0.1:15107]
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15109.global 127.0.0.1
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15107.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15109.global (Addr: 127.0.0.1:15109) (DC: dc1)
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15107.global (Addr: 127.0.0.1:15107) (DC: dc1)
2016/04/14 12:58:03 [INFO] raft: pipelining replication to peer 127.0.0.1:15109
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15107 updated peer set (2): [127.0.0.1:15109 127.0.0.1:15107]
2016/04/14 12:58:03 [INFO] nomad: added raft peer: Node 15109.global (Addr: 127.0.0.1:15109) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15107 updated peer set (2): [127.0.0.1:15111 127.0.0.1:15107 127.0.0.1:15109]
2016/04/14 12:58:03 [INFO] raft: Added peer 127.0.0.1:15111, starting replication
2016/04/14 12:58:03 [DEBUG] raft-net: 127.0.0.1:15111 accepted connection from: 127.0.0.1:45547
2016/04/14 12:58:03 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15109 updated peer set (2): [127.0.0.1:15109 127.0.0.1:15107]
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15107 updated peer set (2): [127.0.0.1:15111 127.0.0.1:15107 127.0.0.1:15109]
2016/04/14 12:58:03 [INFO] nomad: added raft peer: Node 15111.global (Addr: 127.0.0.1:15111) (DC: dc1)
2016/04/14 12:58:03 [WARN] raft: AppendEntries to 127.0.0.1:15111 rejected, sending older logs (next: 1)
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15111 updated peer set (2): [127.0.0.1:15107]
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15111 updated peer set (2): [127.0.0.1:15109 127.0.0.1:15107]
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15111 updated peer set (2): [127.0.0.1:15111 127.0.0.1:15107 127.0.0.1:15109]
2016/04/14 12:58:03 [INFO] raft: pipelining replication to peer 127.0.0.1:15111
2016/04/14 12:58:03 [DEBUG] raft-net: 127.0.0.1:15111 accepted connection from: 127.0.0.1:45548
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15111.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15111.global (Addr: 127.0.0.1:15111) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15111.global
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15109.global
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15109.global
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15109 updated peer set (2): [127.0.0.1:15111 127.0.0.1:15107 127.0.0.1:15109]
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15109.global
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15111.global
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15109.global
2016/04/14 12:58:03 [INFO] nomad: shutting down server
2016/04/14 12:58:03 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:03 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15111
2016/04/14 12:58:03 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15109
2016/04/14 12:58:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:03 [INFO] raft: Node at 127.0.0.1:15109 [Candidate] entering Candidate state
2016/04/14 12:58:03 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:03 [DEBUG] raft: Vote granted from 127.0.0.1:15109. Tally: 1
2016/04/14 12:58:03 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:03 [INFO] raft: Node at 127.0.0.1:15111 [Candidate] entering Candidate state
2016/04/14 12:58:03 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:03 [DEBUG] raft: Vote granted from 127.0.0.1:15111. Tally: 1
2016/04/14 12:58:03 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [DEBUG] raft-net: 127.0.0.1:15109 accepted connection from: 127.0.0.1:37180
2016/04/14 12:58:03 [INFO] raft: Duplicate RequestVote for same term: 1
2016/04/14 12:58:03 [DEBUG] serf: messageLeaveType: Node 15107.global
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15111.global
2016/04/14 12:58:03 [DEBUG] raft-net: 127.0.0.1:15111 accepted connection from: 127.0.0.1:45550
2016/04/14 12:58:03 [INFO] raft: Duplicate RequestVote for same term: 1
2016/04/14 12:58:03 [DEBUG] serf: messageLeaveType: Node 15107.global
2016/04/14 12:58:03 [DEBUG] memberlist: Failed UDP ping: Node 15107.global (timeout reached)
2016/04/14 12:58:03 [WARN] raft: Election timeout reached, restarting election
2016/04/14 12:58:03 [INFO] raft: Node at 127.0.0.1:15111 [Candidate] entering Candidate state
2016/04/14 12:58:03 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:03 [DEBUG] raft: Vote granted from 127.0.0.1:15111. Tally: 1
2016/04/14 12:58:03 [DEBUG] serf: messageLeaveType: Node 15107.global
2016/04/14 12:58:03 [DEBUG] memberlist: Failed UDP ping: Node 15107.global (timeout reached)
2016/04/14 12:58:03 [WARN] raft: Election timeout reached, restarting election
2016/04/14 12:58:03 [INFO] raft: Node at 127.0.0.1:15109 [Candidate] entering Candidate state
2016/04/14 12:58:03 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:03 [DEBUG] raft: Vote granted from 127.0.0.1:15109. Tally: 1
2016/04/14 12:58:03 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [INFO] raft: Duplicate RequestVote for same term: 2
2016/04/14 12:58:03 [INFO] memberlist: Suspect Node 15107.global has failed, no acks received
2016/04/14 12:58:03 [WARN] raft: Election timeout reached, restarting election
2016/04/14 12:58:03 [INFO] raft: Node at 127.0.0.1:15111 [Candidate] entering Candidate state
2016/04/14 12:58:03 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:03 [DEBUG] raft: Vote granted from 127.0.0.1:15111. Tally: 1
2016/04/14 12:58:03 [INFO] memberlist: Suspect Node 15107.global has failed, no acks received
2016/04/14 12:58:03 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [DEBUG] raft-net: 127.0.0.1:15109 accepted connection from: 127.0.0.1:37185
2016/04/14 12:58:03 [INFO] raft: Node at 127.0.0.1:15109 [Follower] entering Follower state
2016/04/14 12:58:03 [DEBUG] serf: messageLeaveType: Node 15107.global
2016/04/14 12:58:03 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [DEBUG] raft: Vote granted from 127.0.0.1:15109. Tally: 2
2016/04/14 12:58:03 [INFO] raft: Election won. Tally: 2
2016/04/14 12:58:03 [INFO] raft: Node at 127.0.0.1:15111 [Leader] entering Leader state
2016/04/14 12:58:03 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:03 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [INFO] raft: pipelining replication to peer 127.0.0.1:15109
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15111 updated peer set (2): [127.0.0.1:15111 127.0.0.1:15107 127.0.0.1:15109]
2016/04/14 12:58:03 [ERR] raft: Failed to heartbeat to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [ERR] raft: Failed to heartbeat to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [DEBUG] memberlist: Failed UDP ping: Node 15107.global (timeout reached)
2016/04/14 12:58:03 [DEBUG] serf: messageLeaveType: Node 15107.global
2016/04/14 12:58:03 [DEBUG] memberlist: Failed UDP ping: Node 15107.global (timeout reached)
2016/04/14 12:58:03 [WARN] raft: Failed to contact 127.0.0.1:15107 in 50.507334ms
2016/04/14 12:58:03 [ERR] raft: Failed to heartbeat to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15109 updated peer set (2): [127.0.0.1:15111 127.0.0.1:15107 127.0.0.1:15109]
2016/04/14 12:58:03 [ERR] raft: Failed to heartbeat to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [INFO] memberlist: Suspect Node 15107.global has failed, no acks received
2016/04/14 12:58:03 [DEBUG] serf: messageLeaveType: Node 15107.global
2016/04/14 12:58:03 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [INFO] memberlist: Suspect Node 15107.global has failed, no acks received
2016/04/14 12:58:03 [DEBUG] serf: messageLeaveType: Node 15107.global
2016/04/14 12:58:03 [WARN] raft: Failed to contact 127.0.0.1:15107 in 99.079334ms
2016/04/14 12:58:03 [ERR] raft: Failed to heartbeat to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [WARN] raft: Failed to contact 127.0.0.1:15107 in 138.909334ms
2016/04/14 12:58:03 [DEBUG] serf: messageLeaveType: Node 15107.global
2016/04/14 12:58:03 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [INFO] memberlist: Marking Node 15107.global as failed, suspect timeout reached
2016/04/14 12:58:03 [INFO] serf: EventMemberLeave: Node 15107.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: removing server Node 15107.global (Addr: 127.0.0.1:15107) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] raft: Failed to contact 127.0.0.1:15107 in 189.269ms
2016/04/14 12:58:03 [INFO] memberlist: Marking Node 15107.global as failed, suspect timeout reached
2016/04/14 12:58:03 [INFO] serf: EventMemberLeave: Node 15107.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: removing server Node 15107.global (Addr: 127.0.0.1:15107) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15111 updated peer set (3): [127.0.0.1:15111 127.0.0.1:15109]
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15111 updated peer set (3): [127.0.0.1:15111 127.0.0.1:15109]
2016/04/14 12:58:03 [INFO] raft: Removed peer 127.0.0.1:15107, stopping replication (Index: 7)
2016/04/14 12:58:03 [INFO] nomad: removed server 'Node 15107.global' as peer
2016/04/14 12:58:03 [DEBUG] memberlist: Failed UDP ping: Node 15107.global (timeout reached)
2016/04/14 12:58:03 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [ERR] raft: Failed to heartbeat to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [DEBUG] serf: messageLeaveType: Node 15107.global
2016/04/14 12:58:03 [DEBUG] serf: messageLeaveType: Node 15107.global
2016/04/14 12:58:03 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15107: dial tcp 127.0.0.1:15107: getsockopt: connection refused
2016/04/14 12:58:03 [INFO] memberlist: Suspect Node 15107.global has failed, no acks received
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15109 updated peer set (3): [127.0.0.1:15111 127.0.0.1:15109]
2016/04/14 12:58:03 [DEBUG] serf: messageLeaveType: Node 15107.global
2016/04/14 12:58:03 [INFO] nomad: shutting down server
2016/04/14 12:58:03 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:03 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15109
2016/04/14 12:58:03 [INFO] nomad: cluster leadership lost
2016/04/14 12:58:03 [INFO] nomad: shutting down server
2016/04/14 12:58:03 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:03 [INFO] nomad: shutting down server
--- PASS: TestLeader_LeftServer (0.83s)
=== RUN   TestLeader_LeftLeader
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15113.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15115.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: starting 4 scheduling worker(s) for [system noop service batch _core]
2016/04/14 12:58:03 [INFO] raft: Node at 127.0.0.1:15113 [Leader] entering Leader state
2016/04/14 12:58:03 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15113 updated peer set (2): [127.0.0.1:15113]
2016/04/14 12:58:03 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15113.global (Addr: 127.0.0.1:15113) (DC: dc1)
2016/04/14 12:58:03 [INFO] raft: Node at 127.0.0.1:15115 [Follower] entering Follower state
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15115.global (Addr: 127.0.0.1:15115) (DC: dc1)
2016/04/14 12:58:03 [INFO] raft: Node at 127.0.0.1:15117 [Follower] entering Follower state
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15117.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15117.global (Addr: 127.0.0.1:15117) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15114
2016/04/14 12:58:03 [DEBUG] memberlist: TCP connection from=127.0.0.1:41754
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15115.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15115.global (Addr: 127.0.0.1:15115) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15113 updated peer set (2): [127.0.0.1:15115 127.0.0.1:15113]
2016/04/14 12:58:03 [INFO] raft: Added peer 127.0.0.1:15115, starting replication
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15113.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15113.global (Addr: 127.0.0.1:15113) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15114
2016/04/14 12:58:03 [DEBUG] memberlist: TCP connection from=127.0.0.1:41756
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15117.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15117.global (Addr: 127.0.0.1:15117) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] raft-net: 127.0.0.1:15115 accepted connection from: 127.0.0.1:41896
2016/04/14 12:58:03 [WARN] raft: Failed to get previous log: 2 log not found (last: 0)
2016/04/14 12:58:03 [WARN] raft: AppendEntries to 127.0.0.1:15115 rejected, sending older logs (next: 1)
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15115.global 127.0.0.1
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15113.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15115.global (Addr: 127.0.0.1:15115) (DC: dc1)
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15113.global (Addr: 127.0.0.1:15113) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] memberlist: Failed UDP ping: Node 15109.global (timeout reached)
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15115 updated peer set (2): [127.0.0.1:15113]
2016/04/14 12:58:03 [DEBUG] raft-net: 127.0.0.1:15115 accepted connection from: 127.0.0.1:41898
2016/04/14 12:58:03 [INFO] raft: pipelining replication to peer 127.0.0.1:15115
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15113 updated peer set (2): [127.0.0.1:15115 127.0.0.1:15113]
2016/04/14 12:58:03 [INFO] nomad: added raft peer: Node 15115.global (Addr: 127.0.0.1:15115) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15113 updated peer set (2): [127.0.0.1:15117 127.0.0.1:15113 127.0.0.1:15115]
2016/04/14 12:58:03 [INFO] raft: Added peer 127.0.0.1:15117, starting replication
2016/04/14 12:58:03 [DEBUG] raft-net: 127.0.0.1:15117 accepted connection from: 127.0.0.1:57290
2016/04/14 12:58:03 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15115 updated peer set (2): [127.0.0.1:15115 127.0.0.1:15113]
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15113 updated peer set (2): [127.0.0.1:15117 127.0.0.1:15113 127.0.0.1:15115]
2016/04/14 12:58:03 [INFO] nomad: added raft peer: Node 15117.global (Addr: 127.0.0.1:15117) (DC: dc1)
2016/04/14 12:58:03 [WARN] raft: AppendEntries to 127.0.0.1:15117 rejected, sending older logs (next: 1)
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15117 updated peer set (2): [127.0.0.1:15113]
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15117 updated peer set (2): [127.0.0.1:15115 127.0.0.1:15113]
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15117 updated peer set (2): [127.0.0.1:15117 127.0.0.1:15113 127.0.0.1:15115]
2016/04/14 12:58:03 [DEBUG] raft-net: 127.0.0.1:15117 accepted connection from: 127.0.0.1:57291
2016/04/14 12:58:03 [INFO] raft: pipelining replication to peer 127.0.0.1:15117
2016/04/14 12:58:03 [INFO] memberlist: Suspect Node 15109.global has failed, no acks received
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15115 updated peer set (2): [127.0.0.1:15117 127.0.0.1:15113 127.0.0.1:15115]
2016/04/14 12:58:03 [INFO] serf: EventMemberJoin: Node 15117.global 127.0.0.1
2016/04/14 12:58:03 [INFO] nomad: adding server Node 15117.global (Addr: 127.0.0.1:15117) (DC: dc1)
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15115.global
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15115.global
2016/04/14 12:58:03 [INFO] nomad: server starting leave
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15113 updated peer set (3): [127.0.0.1:15117 127.0.0.1:15115]
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15115.global
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15117.global
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15117.global
2016/04/14 12:58:03 [DEBUG] serf: messageJoinType: Node 15115.global
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15113 updated peer set (3): [127.0.0.1:15117 127.0.0.1:15115]
2016/04/14 12:58:03 [INFO] raft: Removed peer 127.0.0.1:15115, stopping replication (Index: 5)
2016/04/14 12:58:03 [INFO] raft: Removed peer 127.0.0.1:15117, stopping replication (Index: 5)
2016/04/14 12:58:03 [INFO] raft: Removed ourself, transitioning to follower
2016/04/14 12:58:03 [INFO] raft: Node at 127.0.0.1:15113 [Follower] entering Follower state
2016/04/14 12:58:03 [INFO] nomad: cluster leadership lost
2016/04/14 12:58:03 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15115
2016/04/14 12:58:03 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15117
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15115 updated peer set (3): [127.0.0.1:15117 127.0.0.1:15115]
2016/04/14 12:58:03 [DEBUG] raft: Node 127.0.0.1:15117 updated peer set (3): [127.0.0.1:15117 127.0.0.1:15115]
2016/04/14 12:58:04 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/04/14 12:58:04 [INFO] serf: EventMemberLeave: Node 15113.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: removing server Node 15113.global (Addr: 127.0.0.1:15113) (DC: dc1)
2016/04/14 12:58:04 [DEBUG] serf: messageLeaveType: Node 15113.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15117.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15115.global
2016/04/14 12:58:04 [INFO] serf: EventMemberLeave: Node 15113.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: removing server Node 15113.global (Addr: 127.0.0.1:15113) (DC: dc1)
2016/04/14 12:58:04 [DEBUG] serf: messageLeaveType: Node 15113.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15117.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15115.global
2016/04/14 12:58:04 [INFO] nomad: shutting down server
2016/04/14 12:58:04 [DEBUG] serf: messageLeaveType: Node 15113.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15117.global
2016/04/14 12:58:04 [DEBUG] serf: messageLeaveType: Node 15113.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15117.global
2016/04/14 12:58:04 [INFO] serf: EventMemberLeave: Node 15113.global 127.0.0.1
2016/04/14 12:58:04 [DEBUG] serf: messageLeaveType: Node 15113.global
2016/04/14 12:58:04 [INFO] nomad: removing server Node 15113.global (Addr: 127.0.0.1:15113) (DC: dc1)
2016/04/14 12:58:04 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15117 [Candidate] entering Candidate state
2016/04/14 12:58:04 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:04 [DEBUG] raft: Vote granted from 127.0.0.1:15117. Tally: 1
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15115 accepted connection from: 127.0.0.1:41902
2016/04/14 12:58:04 [WARN] raft: Rejecting vote request from 127.0.0.1:15117 since we have a leader: 127.0.0.1:15113
2016/04/14 12:58:04 [INFO] nomad: shutting down server
2016/04/14 12:58:04 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:04 [INFO] nomad: shutting down server
2016/04/14 12:58:04 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:04 [INFO] nomad: shutting down server
--- PASS: TestLeader_LeftLeader (0.26s)
=== RUN   TestLeader_MultiBootstrap
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15119.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: starting 4 scheduling worker(s) for [batch system noop service _core]
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15121.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:04 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15120
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15119 [Leader] entering Leader state
2016/04/14 12:58:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15119 updated peer set (2): [127.0.0.1:15119]
2016/04/14 12:58:04 [DEBUG] memberlist: TCP connection from=127.0.0.1:49385
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15121.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:04 [ERR] nomad: 'Node 15121.global' and 'Node 15119.global' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15119.global (Addr: 127.0.0.1:15119) (DC: dc1)
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15121.global (Addr: 127.0.0.1:15121) (DC: dc1)
2016/04/14 12:58:04 [ERR] nomad: 'Node 15121.global' and 'Node 15119.global' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15121 [Leader] entering Leader state
2016/04/14 12:58:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15121 updated peer set (2): [127.0.0.1:15121]
2016/04/14 12:58:04 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15121.global (Addr: 127.0.0.1:15121) (DC: dc1)
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15119.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15119.global (Addr: 127.0.0.1:15119) (DC: dc1)
2016/04/14 12:58:04 [ERR] nomad: 'Node 15119.global' and 'Node 15121.global' are both in bootstrap mode. Only one node should be in bootstrap mode, not adding Raft peer.
2016/04/14 12:58:04 [INFO] memberlist: Marking Node 15109.global as failed, suspect timeout reached
2016/04/14 12:58:04 [INFO] serf: EventMemberFailed: Node 15109.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: shutting down server
2016/04/14 12:58:04 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:04 [INFO] nomad: shutting down server
2016/04/14 12:58:04 [WARN] serf: Shutdown without a Leave
--- PASS: TestLeader_MultiBootstrap (0.06s)
=== RUN   TestLeader_PlanQueue_Reset
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15123.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15125.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: starting 4 scheduling worker(s) for [system noop service batch _core]
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15123 [Leader] entering Leader state
2016/04/14 12:58:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15123 updated peer set (2): [127.0.0.1:15123]
2016/04/14 12:58:04 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15123.global (Addr: 127.0.0.1:15123) (DC: dc1)
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15125 [Follower] entering Follower state
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15125.global (Addr: 127.0.0.1:15125) (DC: dc1)
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15127 [Follower] entering Follower state
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15127.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:04 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/04/14 12:58:04 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15124
2016/04/14 12:58:04 [WARN] raft: EnableSingleNode disabled, and no known peers. Aborting election.
2016/04/14 12:58:04 [DEBUG] memberlist: TCP connection from=127.0.0.1:56260
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15125.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15125.global (Addr: 127.0.0.1:15125) (DC: dc1)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15123 updated peer set (2): [127.0.0.1:15125 127.0.0.1:15123]
2016/04/14 12:58:04 [INFO] raft: Added peer 127.0.0.1:15125, starting replication
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15127.global (Addr: 127.0.0.1:15127) (DC: dc1)
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15123.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15123.global (Addr: 127.0.0.1:15123) (DC: dc1)
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15125 accepted connection from: 127.0.0.1:33391
2016/04/14 12:58:04 [WARN] raft: Failed to get previous log: 2 log not found (last: 0)
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15125 accepted connection from: 127.0.0.1:33393
2016/04/14 12:58:04 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15124
2016/04/14 12:58:04 [DEBUG] memberlist: TCP connection from=127.0.0.1:56262
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15127.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15127.global (Addr: 127.0.0.1:15127) (DC: dc1)
2016/04/14 12:58:04 [WARN] raft: AppendEntries to 127.0.0.1:15125 rejected, sending older logs (next: 1)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15125 updated peer set (2): [127.0.0.1:15123]
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15125.global 127.0.0.1
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15123.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15125.global (Addr: 127.0.0.1:15125) (DC: dc1)
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15123.global (Addr: 127.0.0.1:15123) (DC: dc1)
2016/04/14 12:58:04 [INFO] raft: pipelining replication to peer 127.0.0.1:15125
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15123 updated peer set (2): [127.0.0.1:15125 127.0.0.1:15123]
2016/04/14 12:58:04 [INFO] nomad: added raft peer: Node 15125.global (Addr: 127.0.0.1:15125) (DC: dc1)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15123 updated peer set (2): [127.0.0.1:15127 127.0.0.1:15123 127.0.0.1:15125]
2016/04/14 12:58:04 [INFO] raft: Added peer 127.0.0.1:15127, starting replication
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15127 accepted connection from: 127.0.0.1:60997
2016/04/14 12:58:04 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15125 updated peer set (2): [127.0.0.1:15125 127.0.0.1:15123]
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15123 updated peer set (2): [127.0.0.1:15127 127.0.0.1:15123 127.0.0.1:15125]
2016/04/14 12:58:04 [INFO] nomad: added raft peer: Node 15127.global (Addr: 127.0.0.1:15127) (DC: dc1)
2016/04/14 12:58:04 [WARN] raft: AppendEntries to 127.0.0.1:15127 rejected, sending older logs (next: 1)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15127 updated peer set (2): [127.0.0.1:15123]
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15127 updated peer set (2): [127.0.0.1:15125 127.0.0.1:15123]
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15127 updated peer set (2): [127.0.0.1:15127 127.0.0.1:15123 127.0.0.1:15125]
2016/04/14 12:58:04 [INFO] raft: pipelining replication to peer 127.0.0.1:15127
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15127 accepted connection from: 127.0.0.1:60998
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15127.global 127.0.0.1
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15127.global
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15127.global (Addr: 127.0.0.1:15127) (DC: dc1)
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15127.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15125.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15127.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15127.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15127.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15127.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15125.global
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15125 updated peer set (2): [127.0.0.1:15127 127.0.0.1:15123 127.0.0.1:15125]
2016/04/14 12:58:04 [INFO] nomad: shutting down server
2016/04/14 12:58:04 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:04 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15127
2016/04/14 12:58:04 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15125
2016/04/14 12:58:04 [INFO] nomad: cluster leadership lost
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15125.global
2016/04/14 12:58:04 [DEBUG] memberlist: Failed UDP ping: Node 15123.global (timeout reached)
2016/04/14 12:58:04 [INFO] memberlist: Suspect Node 15123.global has failed, no acks received
2016/04/14 12:58:04 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15125 [Candidate] entering Candidate state
2016/04/14 12:58:04 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:04 [DEBUG] raft: Vote granted from 127.0.0.1:15125. Tally: 1
2016/04/14 12:58:04 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15127 [Candidate] entering Candidate state
2016/04/14 12:58:04 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:04 [DEBUG] raft: Vote granted from 127.0.0.1:15127. Tally: 1
2016/04/14 12:58:04 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15123: dial tcp 127.0.0.1:15123: getsockopt: connection refused
2016/04/14 12:58:04 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15123: dial tcp 127.0.0.1:15123: getsockopt: connection refused
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15127 accepted connection from: 127.0.0.1:32769
2016/04/14 12:58:04 [INFO] raft: Duplicate RequestVote for same term: 1
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15125 accepted connection from: 127.0.0.1:33398
2016/04/14 12:58:04 [INFO] raft: Duplicate RequestVote for same term: 1
2016/04/14 12:58:04 [WARN] raft: Election timeout reached, restarting election
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15127 [Candidate] entering Candidate state
2016/04/14 12:58:04 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:04 [DEBUG] raft: Vote granted from 127.0.0.1:15127. Tally: 1
2016/04/14 12:58:04 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15123: dial tcp 127.0.0.1:15123: getsockopt: connection refused
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15125 [Follower] entering Follower state
2016/04/14 12:58:04 [DEBUG] raft: Vote granted from 127.0.0.1:15125. Tally: 2
2016/04/14 12:58:04 [INFO] raft: Election won. Tally: 2
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15127 [Leader] entering Leader state
2016/04/14 12:58:04 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:04 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15123: dial tcp 127.0.0.1:15123: getsockopt: connection refused
2016/04/14 12:58:04 [INFO] raft: pipelining replication to peer 127.0.0.1:15125
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15127 updated peer set (2): [127.0.0.1:15127 127.0.0.1:15123 127.0.0.1:15125]
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15125 accepted connection from: 127.0.0.1:33403
2016/04/14 12:58:04 [ERR] raft: Failed to heartbeat to 127.0.0.1:15123: dial tcp 127.0.0.1:15123: getsockopt: connection refused
2016/04/14 12:58:04 [INFO] nomad: shutting down server
2016/04/14 12:58:04 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:04 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15125
2016/04/14 12:58:04 [INFO] nomad: cluster leadership lost
2016/04/14 12:58:04 [INFO] nomad: shutting down server
2016/04/14 12:58:04 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:04 [INFO] nomad: shutting down server
--- PASS: TestLeader_PlanQueue_Reset (0.61s)
=== RUN   TestLeader_EvalBroker_Reset
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15129 [Leader] entering Leader state
2016/04/14 12:58:04 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15129 updated peer set (2): [127.0.0.1:15129]
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15129.global 127.0.0.1
2016/04/14 12:58:04 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15131.global 127.0.0.1
2016/04/14 12:58:04 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15133.global 127.0.0.1
2016/04/14 12:58:04 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:04 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15130
2016/04/14 12:58:04 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15129.global (Addr: 127.0.0.1:15129) (DC: dc1)
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15131 [Follower] entering Follower state
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15131.global (Addr: 127.0.0.1:15131) (DC: dc1)
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15133 [Follower] entering Follower state
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15133.global (Addr: 127.0.0.1:15133) (DC: dc1)
2016/04/14 12:58:04 [DEBUG] memberlist: TCP connection from=127.0.0.1:59030
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15131.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15131.global (Addr: 127.0.0.1:15131) (DC: dc1)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15129 updated peer set (2): [127.0.0.1:15131 127.0.0.1:15129]
2016/04/14 12:58:04 [INFO] raft: Added peer 127.0.0.1:15131, starting replication
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15131 accepted connection from: 127.0.0.1:44031
2016/04/14 12:58:04 [WARN] raft: Failed to get previous log: 2 log not found (last: 0)
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15129.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15129.global (Addr: 127.0.0.1:15129) (DC: dc1)
2016/04/14 12:58:04 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15130
2016/04/14 12:58:04 [DEBUG] memberlist: TCP connection from=127.0.0.1:59032
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15133.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15133.global (Addr: 127.0.0.1:15133) (DC: dc1)
2016/04/14 12:58:04 [WARN] raft: AppendEntries to 127.0.0.1:15131 rejected, sending older logs (next: 1)
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15131 accepted connection from: 127.0.0.1:44033
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15131 updated peer set (2): [127.0.0.1:15129]
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15131.global 127.0.0.1
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15129.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15131.global (Addr: 127.0.0.1:15131) (DC: dc1)
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15129.global (Addr: 127.0.0.1:15129) (DC: dc1)
2016/04/14 12:58:04 [INFO] raft: pipelining replication to peer 127.0.0.1:15131
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15129 updated peer set (2): [127.0.0.1:15131 127.0.0.1:15129]
2016/04/14 12:58:04 [INFO] nomad: added raft peer: Node 15131.global (Addr: 127.0.0.1:15131) (DC: dc1)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15129 updated peer set (2): [127.0.0.1:15133 127.0.0.1:15129 127.0.0.1:15131]
2016/04/14 12:58:04 [INFO] raft: Added peer 127.0.0.1:15133, starting replication
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15133 accepted connection from: 127.0.0.1:50804
2016/04/14 12:58:04 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15131 updated peer set (2): [127.0.0.1:15131 127.0.0.1:15129]
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15129 updated peer set (2): [127.0.0.1:15133 127.0.0.1:15129 127.0.0.1:15131]
2016/04/14 12:58:04 [INFO] nomad: added raft peer: Node 15133.global (Addr: 127.0.0.1:15133) (DC: dc1)
2016/04/14 12:58:04 [WARN] raft: AppendEntries to 127.0.0.1:15133 rejected, sending older logs (next: 1)
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15133 updated peer set (2): [127.0.0.1:15129]
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15133 updated peer set (2): [127.0.0.1:15131 127.0.0.1:15129]
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15133 updated peer set (2): [127.0.0.1:15133 127.0.0.1:15129 127.0.0.1:15131]
2016/04/14 12:58:04 [INFO] raft: pipelining replication to peer 127.0.0.1:15133
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15133 accepted connection from: 127.0.0.1:50805
2016/04/14 12:58:04 [INFO] memberlist: Marking Node 15123.global as failed, suspect timeout reached
2016/04/14 12:58:04 [INFO] serf: EventMemberFailed: Node 15123.global 127.0.0.1
2016/04/14 12:58:04 [INFO] memberlist: Marking Node 15123.global as failed, suspect timeout reached
2016/04/14 12:58:04 [INFO] serf: EventMemberFailed: Node 15123.global 127.0.0.1
2016/04/14 12:58:04 [DEBUG] raft: Node 127.0.0.1:15131 updated peer set (2): [127.0.0.1:15133 127.0.0.1:15129 127.0.0.1:15131]
2016/04/14 12:58:04 [INFO] serf: EventMemberJoin: Node 15133.global 127.0.0.1
2016/04/14 12:58:04 [INFO] nomad: adding server Node 15133.global (Addr: 127.0.0.1:15133) (DC: dc1)
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15133.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15131.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15131.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15133.global
2016/04/14 12:58:04 [INFO] nomad: shutting down server
2016/04/14 12:58:04 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:04 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15131
2016/04/14 12:58:04 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15133
2016/04/14 12:58:04 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15131 [Candidate] entering Candidate state
2016/04/14 12:58:04 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:04 [DEBUG] raft: Vote granted from 127.0.0.1:15131. Tally: 1
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15133.global
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15133.global
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15133 accepted connection from: 127.0.0.1:50807
2016/04/14 12:58:04 [WARN] raft: Rejecting vote request from 127.0.0.1:15131 since we have a leader: 127.0.0.1:15129
2016/04/14 12:58:04 [DEBUG] serf: messageJoinType: Node 15131.global
2016/04/14 12:58:04 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15129: dial tcp 127.0.0.1:15129: getsockopt: connection refused
2016/04/14 12:58:04 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:04 [INFO] raft: Node at 127.0.0.1:15133 [Candidate] entering Candidate state
2016/04/14 12:58:04 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:04 [DEBUG] raft: Vote granted from 127.0.0.1:15133. Tally: 1
2016/04/14 12:58:04 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15129: dial tcp 127.0.0.1:15129: getsockopt: connection refused
2016/04/14 12:58:04 [DEBUG] raft-net: 127.0.0.1:15131 accepted connection from: 127.0.0.1:44038
2016/04/14 12:58:04 [INFO] raft: Duplicate RequestVote for same term: 1
2016/04/14 12:58:05 [WARN] raft: Election timeout reached, restarting election
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15131 [Candidate] entering Candidate state
2016/04/14 12:58:05 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:05 [DEBUG] raft: Vote granted from 127.0.0.1:15131. Tally: 1
2016/04/14 12:58:05 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15129: dial tcp 127.0.0.1:15129: getsockopt: connection refused
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15133 [Follower] entering Follower state
2016/04/14 12:58:05 [DEBUG] raft: Vote granted from 127.0.0.1:15133. Tally: 2
2016/04/14 12:58:05 [INFO] raft: Election won. Tally: 2
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15131 [Leader] entering Leader state
2016/04/14 12:58:05 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:05 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15129: dial tcp 127.0.0.1:15129: getsockopt: connection refused
2016/04/14 12:58:05 [INFO] raft: pipelining replication to peer 127.0.0.1:15133
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15131 updated peer set (2): [127.0.0.1:15131 127.0.0.1:15133 127.0.0.1:15129]
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15133 accepted connection from: 127.0.0.1:50812
2016/04/14 12:58:05 [ERR] raft: Failed to heartbeat to 127.0.0.1:15129: dial tcp 127.0.0.1:15129: getsockopt: connection refused
2016/04/14 12:58:05 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15129: dial tcp 127.0.0.1:15129: getsockopt: connection refused
2016/04/14 12:58:05 [INFO] nomad: shutting down server
2016/04/14 12:58:05 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:05 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/04/14 12:58:05 [INFO] nomad: shutting down server
2016/04/14 12:58:05 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:05 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15133
2016/04/14 12:58:05 [ERR] raft: Failed to heartbeat to 127.0.0.1:15133: EOF
2016/04/14 12:58:05 [INFO] nomad: shutting down server
--- PASS: TestLeader_EvalBroker_Reset (0.31s)
=== RUN   TestLeader_PeriodicDispatcher_Restore_Adds
2016/04/14 12:58:05 [INFO] serf: EventMemberJoin: Node 15135.global 127.0.0.1
2016/04/14 12:58:05 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15135 [Leader] entering Leader state
2016/04/14 12:58:05 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15135 updated peer set (2): [127.0.0.1:15135]
2016/04/14 12:58:05 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:05 [INFO] nomad: adding server Node 15135.global (Addr: 127.0.0.1:15135) (DC: dc1)
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15137 [Follower] entering Follower state
2016/04/14 12:58:05 [INFO] serf: EventMemberJoin: Node 15137.global 127.0.0.1
2016/04/14 12:58:05 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:05 [INFO] serf: EventMemberJoin: Node 15139.global 127.0.0.1
2016/04/14 12:58:05 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:05 [DEBUG] memberlist: TCP connection from=127.0.0.1:50736
2016/04/14 12:58:05 [INFO] nomad: adding server Node 15137.global (Addr: 127.0.0.1:15137) (DC: dc1)
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15139 [Follower] entering Follower state
2016/04/14 12:58:05 [INFO] nomad: adding server Node 15139.global (Addr: 127.0.0.1:15139) (DC: dc1)
2016/04/14 12:58:05 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15136
2016/04/14 12:58:05 [INFO] serf: EventMemberJoin: Node 15137.global 127.0.0.1
2016/04/14 12:58:05 [INFO] nomad: adding server Node 15137.global (Addr: 127.0.0.1:15137) (DC: dc1)
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15135 updated peer set (2): [127.0.0.1:15137 127.0.0.1:15135]
2016/04/14 12:58:05 [INFO] raft: Added peer 127.0.0.1:15137, starting replication
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15137 accepted connection from: 127.0.0.1:54141
2016/04/14 12:58:05 [WARN] raft: Failed to get previous log: 2 log not found (last: 0)
2016/04/14 12:58:05 [INFO] serf: EventMemberJoin: Node 15135.global 127.0.0.1
2016/04/14 12:58:05 [INFO] nomad: adding server Node 15135.global (Addr: 127.0.0.1:15135) (DC: dc1)
2016/04/14 12:58:05 [WARN] raft: AppendEntries to 127.0.0.1:15137 rejected, sending older logs (next: 1)
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15137 updated peer set (2): [127.0.0.1:15135]
2016/04/14 12:58:05 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15136
2016/04/14 12:58:05 [DEBUG] memberlist: TCP connection from=127.0.0.1:50738
2016/04/14 12:58:05 [INFO] serf: EventMemberJoin: Node 15139.global 127.0.0.1
2016/04/14 12:58:05 [INFO] nomad: adding server Node 15139.global (Addr: 127.0.0.1:15139) (DC: dc1)
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15137 accepted connection from: 127.0.0.1:54143
2016/04/14 12:58:05 [INFO] serf: EventMemberJoin: Node 15137.global 127.0.0.1
2016/04/14 12:58:05 [INFO] serf: EventMemberJoin: Node 15135.global 127.0.0.1
2016/04/14 12:58:05 [INFO] nomad: adding server Node 15137.global (Addr: 127.0.0.1:15137) (DC: dc1)
2016/04/14 12:58:05 [INFO] nomad: adding server Node 15135.global (Addr: 127.0.0.1:15135) (DC: dc1)
2016/04/14 12:58:05 [INFO] raft: pipelining replication to peer 127.0.0.1:15137
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15135 updated peer set (2): [127.0.0.1:15137 127.0.0.1:15135]
2016/04/14 12:58:05 [INFO] nomad: added raft peer: Node 15137.global (Addr: 127.0.0.1:15137) (DC: dc1)
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15135 updated peer set (2): [127.0.0.1:15139 127.0.0.1:15135 127.0.0.1:15137]
2016/04/14 12:58:05 [INFO] raft: Added peer 127.0.0.1:15139, starting replication
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15139 accepted connection from: 127.0.0.1:32909
2016/04/14 12:58:05 [WARN] raft: Failed to get previous log: 3 log not found (last: 0)
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15137 updated peer set (2): [127.0.0.1:15137 127.0.0.1:15135]
2016/04/14 12:58:05 [WARN] raft: AppendEntries to 127.0.0.1:15139 rejected, sending older logs (next: 1)
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15135 updated peer set (2): [127.0.0.1:15139 127.0.0.1:15135 127.0.0.1:15137]
2016/04/14 12:58:05 [INFO] nomad: added raft peer: Node 15139.global (Addr: 127.0.0.1:15139) (DC: dc1)
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15139 updated peer set (2): [127.0.0.1:15135]
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15139 updated peer set (2): [127.0.0.1:15137 127.0.0.1:15135]
2016/04/14 12:58:05 [INFO] raft: pipelining replication to peer 127.0.0.1:15139
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15139 accepted connection from: 127.0.0.1:32910
2016/04/14 12:58:05 [INFO] serf: EventMemberJoin: Node 15139.global 127.0.0.1
2016/04/14 12:58:05 [INFO] nomad: adding server Node 15139.global (Addr: 127.0.0.1:15139) (DC: dc1)
2016/04/14 12:58:05 [DEBUG] serf: messageJoinType: Node 15139.global
2016/04/14 12:58:05 [DEBUG] serf: messageJoinType: Node 15137.global
2016/04/14 12:58:05 [DEBUG] serf: messageJoinType: Node 15137.global
2016/04/14 12:58:05 [DEBUG] serf: messageJoinType: Node 15139.global
2016/04/14 12:58:05 [DEBUG] serf: messageJoinType: Node 15137.global
2016/04/14 12:58:05 [DEBUG] memberlist: Failed UDP ping: Node 15137.global (timeout reached)
2016/04/14 12:58:05 [DEBUG] raft: Failed to contact 127.0.0.1:15137 in 190.713667ms
2016/04/14 12:58:05 [DEBUG] raft: Failed to contact 127.0.0.1:15139 in 190.601334ms
2016/04/14 12:58:05 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15135 [Follower] entering Follower state
2016/04/14 12:58:05 [INFO] nomad: cluster leadership lost
2016/04/14 12:58:05 [DEBUG] memberlist: Failed UDP ping: Node 15139.global (timeout reached)
2016/04/14 12:58:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15137 [Candidate] entering Candidate state
2016/04/14 12:58:05 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:05 [DEBUG] raft: Vote granted from 127.0.0.1:15137. Tally: 1
2016/04/14 12:58:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15139 [Candidate] entering Candidate state
2016/04/14 12:58:05 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:05 [DEBUG] raft: Vote granted from 127.0.0.1:15139. Tally: 1
2016/04/14 12:58:05 [DEBUG] serf: messageJoinType: Node 15137.global
2016/04/14 12:58:05 [DEBUG] serf: messageJoinType: Node 15139.global
2016/04/14 12:58:05 [DEBUG] serf: messageJoinType: Node 15139.global
2016/04/14 12:58:05 [DEBUG] serf: messageJoinType: Node 15139.global
2016/04/14 12:58:05 [INFO] memberlist: Suspect Node 15137.global has failed, no acks received
2016/04/14 12:58:05 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15137
2016/04/14 12:58:05 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15139
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15135 accepted connection from: 127.0.0.1:43303
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15137 accepted connection from: 127.0.0.1:54148
2016/04/14 12:58:05 [INFO] raft: Duplicate RequestVote for same term: 1
2016/04/14 12:58:05 [ERR] raft-net: Failed to decode incoming command: read tcp 127.0.0.1:15139->127.0.0.1:32909: read: connection reset by peer
2016/04/14 12:58:05 [DEBUG] serf: messageJoinType: Node 15137.global
2016/04/14 12:58:05 [DEBUG] serf: messageJoinType: Node 15139.global
2016/04/14 12:58:05 [DEBUG] serf: messageJoinType: Node 15137.global
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15135 accepted connection from: 127.0.0.1:43304
2016/04/14 12:58:05 [INFO] raft: Duplicate RequestVote for same term: 1
2016/04/14 12:58:05 [WARN] raft: Remote peer 127.0.0.1:15137 does not have local node 127.0.0.1:15139 as a peer
2016/04/14 12:58:05 [DEBUG] raft: Vote granted from 127.0.0.1:15135. Tally: 2
2016/04/14 12:58:05 [INFO] raft: Election won. Tally: 2
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15137 [Leader] entering Leader state
2016/04/14 12:58:05 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:05 [INFO] raft: pipelining replication to peer 127.0.0.1:15135
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15137 updated peer set (2): [127.0.0.1:15139 127.0.0.1:15135 127.0.0.1:15137]
2016/04/14 12:58:05 [INFO] raft: Added peer 127.0.0.1:15139, starting replication
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15137 updated peer set (2): [127.0.0.1:15137 127.0.0.1:15135]
2016/04/14 12:58:05 [INFO] raft: Removed peer 127.0.0.1:15139, stopping replication (Index: 5)
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15139 accepted connection from: 127.0.0.1:32914
2016/04/14 12:58:05 [WARN] raft: Failed to get previous log: 6 log not found (last: 4)
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15139 [Follower] entering Follower state
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15137 updated peer set (2): [127.0.0.1:15139 127.0.0.1:15137 127.0.0.1:15135]
2016/04/14 12:58:05 [INFO] raft: Added peer 127.0.0.1:15139, starting replication
2016/04/14 12:58:05 [WARN] raft: AppendEntries to 127.0.0.1:15139 rejected, sending older logs (next: 5)
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15135 accepted connection from: 127.0.0.1:43308
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15139 updated peer set (2): [127.0.0.1:15139 127.0.0.1:15135 127.0.0.1:15137]
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15139 updated peer set (2): [127.0.0.1:15137 127.0.0.1:15135]
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15139 accepted connection from: 127.0.0.1:32915
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15135 updated peer set (2): [127.0.0.1:15137 127.0.0.1:15135]
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15139 accepted connection from: 127.0.0.1:32917
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15137 updated peer set (2): [127.0.0.1:15139 127.0.0.1:15137 127.0.0.1:15135]
2016/04/14 12:58:05 [INFO] nomad: added raft peer: Node 15139.global (Addr: 127.0.0.1:15139) (DC: dc1)
2016/04/14 12:58:05 [INFO] raft: pipelining replication to peer 127.0.0.1:15139
2016/04/14 12:58:05 [INFO] raft: pipelining replication to peer 127.0.0.1:15139
2016/04/14 12:58:05 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15139
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15139 updated peer set (2): [127.0.0.1:15139 127.0.0.1:15137 127.0.0.1:15135]
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15139 accepted connection from: 127.0.0.1:32918
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15135 updated peer set (2): [127.0.0.1:15139 127.0.0.1:15137 127.0.0.1:15135]
2016/04/14 12:58:05 [DEBUG] nomad.periodic: registered periodic job "f5c606a6-5efa-3449-ebd7-ccabeb7b8935"
2016/04/14 12:58:05 [INFO] nomad: shutting down server
2016/04/14 12:58:05 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:05 [DEBUG] nomad.periodic: launching job "f5c606a6-5efa-3449-ebd7-ccabeb7b8935" in 1m54.554959096s
2016/04/14 12:58:05 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15139
2016/04/14 12:58:05 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15135
2016/04/14 12:58:05 [DEBUG] memberlist: Failed UDP ping: Node 15137.global (timeout reached)
2016/04/14 12:58:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15135 [Candidate] entering Candidate state
2016/04/14 12:58:05 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:05 [DEBUG] raft: Vote granted from 127.0.0.1:15135. Tally: 1
2016/04/14 12:58:05 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/04/14 12:58:05 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15137: EOF
2016/04/14 12:58:05 [WARN] raft: Rejecting vote request from 127.0.0.1:15135 since we have a leader: 127.0.0.1:15137
2016/04/14 12:58:05 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15139 [Candidate] entering Candidate state
2016/04/14 12:58:05 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:05 [DEBUG] raft: Vote granted from 127.0.0.1:15139. Tally: 1
2016/04/14 12:58:05 [INFO] raft: Duplicate RequestVote for same term: 2
2016/04/14 12:58:05 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/04/14 12:58:05 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15137: EOF
2016/04/14 12:58:05 [INFO] memberlist: Suspect Node 15137.global has failed, no acks received
2016/04/14 12:58:05 [INFO] memberlist: Marking Node 15137.global as failed, suspect timeout reached
2016/04/14 12:58:05 [INFO] serf: EventMemberFailed: Node 15137.global 127.0.0.1
2016/04/14 12:58:05 [INFO] nomad: removing server Node 15137.global (Addr: 127.0.0.1:15137) (DC: dc1)
2016/04/14 12:58:05 [INFO] memberlist: Marking Node 15137.global as failed, suspect timeout reached
2016/04/14 12:58:05 [INFO] serf: EventMemberFailed: Node 15137.global 127.0.0.1
2016/04/14 12:58:05 [INFO] nomad: removing server Node 15137.global (Addr: 127.0.0.1:15137) (DC: dc1)
2016/04/14 12:58:05 [DEBUG] memberlist: Failed UDP ping: Node 15137.global (timeout reached)
2016/04/14 12:58:05 [WARN] raft: Election timeout reached, restarting election
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15139 [Candidate] entering Candidate state
2016/04/14 12:58:05 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:05 [DEBUG] raft: Vote granted from 127.0.0.1:15139. Tally: 1
2016/04/14 12:58:05 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15137: dial tcp 127.0.0.1:15137: getsockopt: connection refused
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15135 [Follower] entering Follower state
2016/04/14 12:58:05 [DEBUG] raft: Vote granted from 127.0.0.1:15135. Tally: 2
2016/04/14 12:58:05 [INFO] raft: Election won. Tally: 2
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15139 [Leader] entering Leader state
2016/04/14 12:58:05 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:05 [INFO] raft: pipelining replication to peer 127.0.0.1:15135
2016/04/14 12:58:05 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15137: dial tcp 127.0.0.1:15137: getsockopt: connection refused
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15139 updated peer set (2): [127.0.0.1:15139 127.0.0.1:15137 127.0.0.1:15135]
2016/04/14 12:58:05 [DEBUG] nomad.periodic: registered periodic job "f5c606a6-5efa-3449-ebd7-ccabeb7b8935"
2016/04/14 12:58:05 [DEBUG] nomad.periodic: launching job "f5c606a6-5efa-3449-ebd7-ccabeb7b8935" in 1m54.376637763s
2016/04/14 12:58:05 [DEBUG] nomad.periodic: launching job "f5c606a6-5efa-3449-ebd7-ccabeb7b8935" in 1m54.376411429s
2016/04/14 12:58:05 [ERR] raft: Failed to heartbeat to 127.0.0.1:15137: dial tcp 127.0.0.1:15137: getsockopt: connection refused
2016/04/14 12:58:05 [INFO] nomad: shutting down server
2016/04/14 12:58:05 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:05 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15135
2016/04/14 12:58:05 [INFO] nomad: cluster leadership lost
2016/04/14 12:58:05 [DEBUG] raft-net: 127.0.0.1:15135 accepted connection from: 127.0.0.1:43318
2016/04/14 12:58:05 [INFO] nomad: shutting down server
2016/04/14 12:58:05 [INFO] nomad: shutting down server
2016/04/14 12:58:05 [WARN] serf: Shutdown without a Leave
--- PASS: TestLeader_PeriodicDispatcher_Restore_Adds (0.58s)
=== RUN   TestLeader_PeriodicDispatcher_Restore_NoEvals
2016/04/14 12:58:05 [INFO] serf: EventMemberJoin: Node 15141.global 127.0.0.1
2016/04/14 12:58:05 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:05 [INFO] raft: Node at 127.0.0.1:15141 [Leader] entering Leader state
2016/04/14 12:58:05 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:05 [DEBUG] raft: Node 127.0.0.1:15141 updated peer set (2): [127.0.0.1:15141]
2016/04/14 12:58:05 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:05 [INFO] nomad: adding server Node 15141.global (Addr: 127.0.0.1:15141) (DC: dc1)
2016/04/14 12:58:05 [DEBUG] nomad.periodic: registered periodic job "c11829bb-8955-0e12-4144-2e762d36beca"
2016/04/14 12:58:05 [INFO] memberlist: Suspect Node 15137.global has failed, no acks received
2016/04/14 12:58:08 [DEBUG] nomad.periodic: registered periodic job "c11829bb-8955-0e12-4144-2e762d36beca"
2016/04/14 12:58:08 [DEBUG] nomad.periodic: periodic job "c11829bb-8955-0e12-4144-2e762d36beca" force run during leadership establishment
2016/04/14 12:58:08 [INFO] nomad: shutting down server
2016/04/14 12:58:08 [WARN] serf: Shutdown without a Leave
--- PASS: TestLeader_PeriodicDispatcher_Restore_NoEvals (3.03s)
=== RUN   TestLeader_PeriodicDispatcher_Restore_Evals
2016/04/14 12:58:08 [INFO] serf: EventMemberJoin: Node 15143.global 127.0.0.1
2016/04/14 12:58:08 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:08 [INFO] raft: Node at 127.0.0.1:15143 [Leader] entering Leader state
2016/04/14 12:58:08 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:08 [DEBUG] raft: Node 127.0.0.1:15143 updated peer set (2): [127.0.0.1:15143]
2016/04/14 12:58:08 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:08 [INFO] nomad: adding server Node 15143.global (Addr: 127.0.0.1:15143) (DC: dc1)
2016/04/14 12:58:08 [DEBUG] nomad.periodic: registered periodic job "a44ee814-a1e2-dca6-6b82-eeace5653992"
2016/04/14 12:58:11 [DEBUG] nomad.periodic: registered periodic job "a44ee814-a1e2-dca6-6b82-eeace5653992"
2016/04/14 12:58:11 [DEBUG] nomad.periodic: periodic job "a44ee814-a1e2-dca6-6b82-eeace5653992" force run during leadership establishment
2016/04/14 12:58:11 [INFO] nomad: shutting down server
2016/04/14 12:58:11 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:11 [DEBUG] nomad.periodic: launching job "a44ee814-a1e2-dca6-6b82-eeace5653992" in 7.305026095s
2016/04/14 12:58:11 [DEBUG] nomad.periodic: launching job "a44ee814-a1e2-dca6-6b82-eeace5653992" in 7.304749429s
--- PASS: TestLeader_PeriodicDispatcher_Restore_Evals (3.03s)
=== RUN   TestLeader_PeriodicDispatch
2016/04/14 12:58:11 [INFO] serf: EventMemberJoin: Node 15145.global 127.0.0.1
2016/04/14 12:58:11 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:11 [INFO] raft: Node at 127.0.0.1:15145 [Leader] entering Leader state
2016/04/14 12:58:11 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:11 [DEBUG] raft: Node 127.0.0.1:15145 updated peer set (2): [127.0.0.1:15145]
2016/04/14 12:58:11 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:11 [INFO] nomad: adding server Node 15145.global (Addr: 127.0.0.1:15145) (DC: dc1)
2016/04/14 12:58:11 [INFO] nomad: shutting down server
2016/04/14 12:58:11 [WARN] serf: Shutdown without a Leave
--- PASS: TestLeader_PeriodicDispatch (0.02s)
=== RUN   TestLeader_ReapFailedEval
2016/04/14 12:58:11 [INFO] serf: EventMemberJoin: Node 15147.global 127.0.0.1
2016/04/14 12:58:11 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:11 [INFO] raft: Node at 127.0.0.1:15147 [Leader] entering Leader state
2016/04/14 12:58:11 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:11 [DEBUG] raft: Node 127.0.0.1:15147 updated peer set (2): [127.0.0.1:15147]
2016/04/14 12:58:11 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:11 [INFO] nomad: adding server Node 15147.global (Addr: 127.0.0.1:15147) (DC: dc1)
2016/04/14 12:58:11 [WARN] nomad: eval <Eval 'a26669f9-caac-548f-f537-240a4746d5a0' JobID: '43e15f26-863a-a452-b4be-18c452c16124'> reached delivery limit, marking as failed
2016/04/14 12:58:11 [INFO] nomad: shutting down server
2016/04/14 12:58:11 [WARN] serf: Shutdown without a Leave
--- PASS: TestLeader_ReapFailedEval (0.04s)
=== RUN   TestLeader_ReapDuplicateEval
2016/04/14 12:58:11 [INFO] serf: EventMemberJoin: Node 15149.global 127.0.0.1
2016/04/14 12:58:11 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:11 [INFO] raft: Node at 127.0.0.1:15149 [Leader] entering Leader state
2016/04/14 12:58:11 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:11 [DEBUG] raft: Node 127.0.0.1:15149 updated peer set (2): [127.0.0.1:15149]
2016/04/14 12:58:11 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:11 [INFO] nomad: adding server Node 15149.global (Addr: 127.0.0.1:15149) (DC: dc1)
2016/04/14 12:58:11 [INFO] nomad: shutting down server
2016/04/14 12:58:11 [WARN] serf: Shutdown without a Leave
--- PASS: TestLeader_ReapDuplicateEval (0.03s)
=== RUN   TestClientEndpoint_Register
2016/04/14 12:58:11 [INFO] serf: EventMemberJoin: Node 15151.global 127.0.0.1
2016/04/14 12:58:11 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:11 [INFO] raft: Node at 127.0.0.1:15151 [Leader] entering Leader state
2016/04/14 12:58:11 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:11 [DEBUG] raft: Node 127.0.0.1:15151 updated peer set (2): [127.0.0.1:15151]
2016/04/14 12:58:11 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:11 [INFO] nomad: adding server Node 15151.global (Addr: 127.0.0.1:15151) (DC: dc1)
2016/04/14 12:58:11 [INFO] nomad: shutting down server
2016/04/14 12:58:11 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_Register (0.03s)
=== RUN   TestClientEndpoint_Deregister
2016/04/14 12:58:11 [INFO] serf: EventMemberJoin: Node 15153.global 127.0.0.1
2016/04/14 12:58:11 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:58:11 [INFO] raft: Node at 127.0.0.1:15153 [Leader] entering Leader state
2016/04/14 12:58:11 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:11 [DEBUG] raft: Node 127.0.0.1:15153 updated peer set (2): [127.0.0.1:15153]
2016/04/14 12:58:11 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:11 [INFO] nomad: adding server Node 15153.global (Addr: 127.0.0.1:15153) (DC: dc1)
2016/04/14 12:58:11 [INFO] nomad: shutting down server
2016/04/14 12:58:11 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_Deregister (0.03s)
=== RUN   TestClientEndpoint_UpdateStatus
2016/04/14 12:58:11 [INFO] serf: EventMemberJoin: Node 15155.global 127.0.0.1
2016/04/14 12:58:11 [INFO] nomad: starting 4 scheduling worker(s) for [system noop service batch _core]
2016/04/14 12:58:11 [INFO] raft: Node at 127.0.0.1:15155 [Leader] entering Leader state
2016/04/14 12:58:11 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:11 [DEBUG] raft: Node 127.0.0.1:15155 updated peer set (2): [127.0.0.1:15155]
2016/04/14 12:58:11 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:11 [INFO] nomad: adding server Node 15155.global (Addr: 127.0.0.1:15155) (DC: dc1)
2016/04/14 12:58:11 [INFO] nomad: shutting down server
2016/04/14 12:58:11 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_UpdateStatus (0.03s)
=== RUN   TestClientEndpoint_UpdateStatus_GetEvals
2016/04/14 12:58:11 [INFO] serf: EventMemberJoin: Node 15157.global 127.0.0.1
2016/04/14 12:58:11 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:58:11 [INFO] raft: Node at 127.0.0.1:15157 [Leader] entering Leader state
2016/04/14 12:58:11 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:11 [DEBUG] raft: Node 127.0.0.1:15157 updated peer set (2): [127.0.0.1:15157]
2016/04/14 12:58:11 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:11 [INFO] nomad: adding server Node 15157.global (Addr: 127.0.0.1:15157) (DC: dc1)
2016/04/14 12:58:11 [INFO] nomad: shutting down server
2016/04/14 12:58:11 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_UpdateStatus_GetEvals (0.03s)
=== RUN   TestClientEndpoint_UpdateStatus_HeartbeatOnly
2016/04/14 12:58:11 [INFO] serf: EventMemberJoin: Node 15159.global 127.0.0.1
2016/04/14 12:58:11 [INFO] nomad: starting 4 scheduling worker(s) for [batch system noop service _core]
2016/04/14 12:58:11 [INFO] raft: Node at 127.0.0.1:15159 [Leader] entering Leader state
2016/04/14 12:58:11 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:11 [DEBUG] raft: Node 127.0.0.1:15159 updated peer set (2): [127.0.0.1:15159]
2016/04/14 12:58:11 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:11 [INFO] nomad: adding server Node 15159.global (Addr: 127.0.0.1:15159) (DC: dc1)
2016/04/14 12:58:11 [INFO] nomad: shutting down server
2016/04/14 12:58:11 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_UpdateStatus_HeartbeatOnly (0.03s)
=== RUN   TestClientEndpoint_UpdateDrain
2016/04/14 12:58:11 [INFO] serf: EventMemberJoin: Node 15161.global 127.0.0.1
2016/04/14 12:58:11 [INFO] raft: Node at 127.0.0.1:15161 [Leader] entering Leader state
2016/04/14 12:58:11 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:11 [DEBUG] raft: Node 127.0.0.1:15161 updated peer set (2): [127.0.0.1:15161]
2016/04/14 12:58:11 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:11 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:11 [INFO] nomad: adding server Node 15161.global (Addr: 127.0.0.1:15161) (DC: dc1)
2016/04/14 12:58:11 [INFO] nomad: shutting down server
2016/04/14 12:58:11 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_UpdateDrain (0.04s)
=== RUN   TestClientEndpoint_GetNode
2016/04/14 12:58:11 [INFO] serf: EventMemberJoin: Node 15163.global 127.0.0.1
2016/04/14 12:58:11 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:12 [INFO] raft: Node at 127.0.0.1:15163 [Leader] entering Leader state
2016/04/14 12:58:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:12 [DEBUG] raft: Node 127.0.0.1:15163 updated peer set (2): [127.0.0.1:15163]
2016/04/14 12:58:12 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:12 [INFO] nomad: adding server Node 15163.global (Addr: 127.0.0.1:15163) (DC: dc1)
2016/04/14 12:58:12 [INFO] nomad: shutting down server
2016/04/14 12:58:12 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_GetNode (0.03s)
=== RUN   TestClientEndpoint_GetNode_Blocking
2016/04/14 12:58:12 [INFO] serf: EventMemberJoin: Node 15165.global 127.0.0.1
2016/04/14 12:58:12 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:12 [INFO] raft: Node at 127.0.0.1:15165 [Leader] entering Leader state
2016/04/14 12:58:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:12 [DEBUG] raft: Node 127.0.0.1:15165 updated peer set (2): [127.0.0.1:15165]
2016/04/14 12:58:12 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:12 [INFO] nomad: adding server Node 15165.global (Addr: 127.0.0.1:15165) (DC: dc1)
2016/04/14 12:58:12 [INFO] nomad: shutting down server
2016/04/14 12:58:12 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:12 [INFO] nomad: cluster leadership lost
--- PASS: TestClientEndpoint_GetNode_Blocking (0.43s)
=== RUN   TestClientEndpoint_GetAllocs
2016/04/14 12:58:12 [INFO] serf: EventMemberJoin: Node 15167.global 127.0.0.1
2016/04/14 12:58:12 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:12 [INFO] raft: Node at 127.0.0.1:15167 [Leader] entering Leader state
2016/04/14 12:58:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:12 [DEBUG] raft: Node 127.0.0.1:15167 updated peer set (2): [127.0.0.1:15167]
2016/04/14 12:58:12 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:12 [INFO] nomad: adding server Node 15167.global (Addr: 127.0.0.1:15167) (DC: dc1)
2016/04/14 12:58:12 [INFO] nomad: shutting down server
2016/04/14 12:58:12 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_GetAllocs (0.03s)
=== RUN   TestClientEndpoint_GetClientAllocs
2016/04/14 12:58:12 [INFO] serf: EventMemberJoin: Node 15169.global 127.0.0.1
2016/04/14 12:58:12 [INFO] raft: Node at 127.0.0.1:15169 [Leader] entering Leader state
2016/04/14 12:58:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:12 [DEBUG] raft: Node 127.0.0.1:15169 updated peer set (2): [127.0.0.1:15169]
2016/04/14 12:58:12 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:12 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:12 [INFO] nomad: adding server Node 15169.global (Addr: 127.0.0.1:15169) (DC: dc1)
2016/04/14 12:58:12 [INFO] nomad: shutting down server
2016/04/14 12:58:12 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_GetClientAllocs (0.03s)
=== RUN   TestClientEndpoint_GetClientAllocs_Blocking
2016/04/14 12:58:12 [INFO] serf: EventMemberJoin: Node 15171.global 127.0.0.1
2016/04/14 12:58:12 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:58:12 [INFO] raft: Node at 127.0.0.1:15171 [Leader] entering Leader state
2016/04/14 12:58:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:12 [DEBUG] raft: Node 127.0.0.1:15171 updated peer set (2): [127.0.0.1:15171]
2016/04/14 12:58:12 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:12 [INFO] nomad: adding server Node 15171.global (Addr: 127.0.0.1:15171) (DC: dc1)
2016/04/14 12:58:12 [INFO] nomad: shutting down server
2016/04/14 12:58:12 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_GetClientAllocs_Blocking (0.23s)
=== RUN   TestClientEndpoint_GetAllocs_Blocking
2016/04/14 12:58:12 [INFO] serf: EventMemberJoin: Node 15173.global 127.0.0.1
2016/04/14 12:58:12 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:12 [INFO] raft: Node at 127.0.0.1:15173 [Leader] entering Leader state
2016/04/14 12:58:12 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:12 [DEBUG] raft: Node 127.0.0.1:15173 updated peer set (2): [127.0.0.1:15173]
2016/04/14 12:58:12 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:12 [INFO] nomad: adding server Node 15173.global (Addr: 127.0.0.1:15173) (DC: dc1)
2016/04/14 12:58:12 [INFO] nomad: shutting down server
2016/04/14 12:58:12 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_GetAllocs_Blocking (0.24s)
=== RUN   TestClientEndpoint_UpdateAlloc
2016/04/14 12:58:13 [INFO] serf: EventMemberJoin: Node 15175.global 127.0.0.1
2016/04/14 12:58:13 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:13 [INFO] raft: Node at 127.0.0.1:15175 [Leader] entering Leader state
2016/04/14 12:58:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:13 [DEBUG] raft: Node 127.0.0.1:15175 updated peer set (2): [127.0.0.1:15175]
2016/04/14 12:58:13 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:13 [INFO] nomad: adding server Node 15175.global (Addr: 127.0.0.1:15175) (DC: dc1)
2016/04/14 12:58:13 [INFO] nomad: shutting down server
2016/04/14 12:58:13 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_UpdateAlloc (0.31s)
=== RUN   TestClientEndpoint_BatchUpdate
2016/04/14 12:58:13 [INFO] serf: EventMemberJoin: Node 15177.global 127.0.0.1
2016/04/14 12:58:13 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:13 [INFO] raft: Node at 127.0.0.1:15177 [Leader] entering Leader state
2016/04/14 12:58:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:13 [DEBUG] raft: Node 127.0.0.1:15177 updated peer set (2): [127.0.0.1:15177]
2016/04/14 12:58:13 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:13 [INFO] nomad: adding server Node 15177.global (Addr: 127.0.0.1:15177) (DC: dc1)
2016/04/14 12:58:13 [INFO] nomad: shutting down server
2016/04/14 12:58:13 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_BatchUpdate (0.04s)
=== RUN   TestClientEndpoint_CreateNodeEvals
2016/04/14 12:58:13 [INFO] serf: EventMemberJoin: Node 15179.global 127.0.0.1
2016/04/14 12:58:13 [INFO] nomad: starting 4 scheduling worker(s) for [batch system noop service _core]
2016/04/14 12:58:13 [INFO] raft: Node at 127.0.0.1:15179 [Leader] entering Leader state
2016/04/14 12:58:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:13 [DEBUG] raft: Node 127.0.0.1:15179 updated peer set (2): [127.0.0.1:15179]
2016/04/14 12:58:13 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:13 [INFO] nomad: adding server Node 15179.global (Addr: 127.0.0.1:15179) (DC: dc1)
2016/04/14 12:58:13 [INFO] nomad: shutting down server
2016/04/14 12:58:13 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_CreateNodeEvals (0.02s)
=== RUN   TestClientEndpoint_Evaluate
2016/04/14 12:58:13 [INFO] serf: EventMemberJoin: Node 15181.global 127.0.0.1
2016/04/14 12:58:13 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:13 [INFO] raft: Node at 127.0.0.1:15181 [Leader] entering Leader state
2016/04/14 12:58:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:13 [DEBUG] raft: Node 127.0.0.1:15181 updated peer set (2): [127.0.0.1:15181]
2016/04/14 12:58:13 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:13 [INFO] nomad: adding server Node 15181.global (Addr: 127.0.0.1:15181) (DC: dc1)
2016/04/14 12:58:13 [INFO] nomad: shutting down server
2016/04/14 12:58:13 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_Evaluate (0.03s)
=== RUN   TestClientEndpoint_ListNodes
2016/04/14 12:58:13 [INFO] serf: EventMemberJoin: Node 15183.global 127.0.0.1
2016/04/14 12:58:13 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:13 [INFO] raft: Node at 127.0.0.1:15183 [Leader] entering Leader state
2016/04/14 12:58:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:13 [DEBUG] raft: Node 127.0.0.1:15183 updated peer set (2): [127.0.0.1:15183]
2016/04/14 12:58:13 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:13 [INFO] nomad: adding server Node 15183.global (Addr: 127.0.0.1:15183) (DC: dc1)
2016/04/14 12:58:13 [INFO] nomad: shutting down server
2016/04/14 12:58:13 [WARN] serf: Shutdown without a Leave
--- PASS: TestClientEndpoint_ListNodes (0.03s)
=== RUN   TestClientEndpoint_ListNodes_Blocking
2016/04/14 12:58:13 [INFO] serf: EventMemberJoin: Node 15185.global 127.0.0.1
2016/04/14 12:58:13 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:58:13 [INFO] raft: Node at 127.0.0.1:15185 [Leader] entering Leader state
2016/04/14 12:58:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:13 [DEBUG] raft: Node 127.0.0.1:15185 updated peer set (2): [127.0.0.1:15185]
2016/04/14 12:58:13 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:13 [INFO] nomad: adding server Node 15185.global (Addr: 127.0.0.1:15185) (DC: dc1)
2016/04/14 12:58:13 [INFO] nomad: shutting down server
2016/04/14 12:58:13 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:13 [INFO] nomad: cluster leadership lost
--- PASS: TestClientEndpoint_ListNodes_Blocking (0.43s)
=== RUN   TestBatchFuture
--- PASS: TestBatchFuture (0.01s)
=== RUN   TestPeriodicEndpoint_Force
2016/04/14 12:58:13 [INFO] serf: EventMemberJoin: Node 15187.global 127.0.0.1
2016/04/14 12:58:13 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:13 [INFO] raft: Node at 127.0.0.1:15187 [Leader] entering Leader state
2016/04/14 12:58:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:13 [DEBUG] raft: Node 127.0.0.1:15187 updated peer set (2): [127.0.0.1:15187]
2016/04/14 12:58:13 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:13 [INFO] nomad: adding server Node 15187.global (Addr: 127.0.0.1:15187) (DC: dc1)
2016/04/14 12:58:13 [DEBUG] nomad.periodic: registered periodic job "7ac1f0c4-38d3-178c-c42d-8acdc995bf2d"
2016/04/14 12:58:13 [DEBUG] nomad.periodic: launching job "7ac1f0c4-38d3-178c-c42d-8acdc995bf2d" in 1m46.100452762s
2016/04/14 12:58:13 [INFO] nomad: shutting down server
2016/04/14 12:58:13 [WARN] serf: Shutdown without a Leave
--- PASS: TestPeriodicEndpoint_Force (0.03s)
=== RUN   TestPeriodicEndpoint_Force_NonPeriodic
2016/04/14 12:58:13 [INFO] serf: EventMemberJoin: Node 15189.global 127.0.0.1
2016/04/14 12:58:13 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:13 [INFO] raft: Node at 127.0.0.1:15189 [Leader] entering Leader state
2016/04/14 12:58:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:13 [DEBUG] raft: Node 127.0.0.1:15189 updated peer set (2): [127.0.0.1:15189]
2016/04/14 12:58:13 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:13 [INFO] nomad: adding server Node 15189.global (Addr: 127.0.0.1:15189) (DC: dc1)
2016/04/14 12:58:13 [INFO] nomad: shutting down server
2016/04/14 12:58:13 [WARN] serf: Shutdown without a Leave
--- PASS: TestPeriodicEndpoint_Force_NonPeriodic (0.02s)
=== RUN   TestPeriodicDispatch_Add_NonPeriodic
=== RUN   TestPeriodicDispatch_Add_UpdateJob
=== RUN   TestPeriodicDispatch_Add_RemoveJob
=== RUN   TestPeriodicDispatch_Add_TriggersUpdate
=== RUN   TestPeriodicDispatch_Remove_Untracked
=== RUN   TestPeriodicDispatch_Remove_Tracked
=== RUN   TestPeriodicDispatch_Remove_TriggersUpdate
=== RUN   TestPeriodicDispatch_ForceRun_Untracked
=== RUN   TestPeriodicDispatch_ForceRun_Tracked
=== RUN   TestPeriodicDispatch_Run_DisallowOverlaps
=== RUN   TestPeriodicDispatch_Run_Multiple
=== RUN   TestPeriodicDispatch_Run_SameTime
=== RUN   TestPeriodicDispatch_Complex
=== RUN   TestPeriodicHeap_Order
=== RUN   TestPeriodicDispatch_RunningChildren_NoEvals
2016/04/14 12:58:13 [INFO] serf: EventMemberJoin: Node 15191.global 127.0.0.1
2016/04/14 12:58:13 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:13 [INFO] raft: Node at 127.0.0.1:15191 [Leader] entering Leader state
2016/04/14 12:58:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:13 [DEBUG] raft: Node 127.0.0.1:15191 updated peer set (2): [127.0.0.1:15191]
2016/04/14 12:58:13 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:13 [INFO] nomad: adding server Node 15191.global (Addr: 127.0.0.1:15191) (DC: dc1)
2016/04/14 12:58:13 [INFO] nomad: shutting down server
2016/04/14 12:58:13 [WARN] serf: Shutdown without a Leave
--- PASS: TestPeriodicDispatch_RunningChildren_NoEvals (0.02s)
=== RUN   TestPeriodicDispatch_RunningChildren_ActiveEvals
2016/04/14 12:58:13 [INFO] serf: EventMemberJoin: Node 15193.global 127.0.0.1
2016/04/14 12:58:13 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:58:13 [INFO] raft: Node at 127.0.0.1:15193 [Leader] entering Leader state
2016/04/14 12:58:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:13 [DEBUG] raft: Node 127.0.0.1:15193 updated peer set (2): [127.0.0.1:15193]
2016/04/14 12:58:13 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:13 [INFO] nomad: adding server Node 15193.global (Addr: 127.0.0.1:15193) (DC: dc1)
2016/04/14 12:58:13 [INFO] nomad: shutting down server
2016/04/14 12:58:13 [WARN] serf: Shutdown without a Leave
--- PASS: TestPeriodicDispatch_RunningChildren_ActiveEvals (0.02s)
=== RUN   TestPeriodicDispatch_RunningChildren_ActiveAllocs
2016/04/14 12:58:13 [INFO] serf: EventMemberJoin: Node 15195.global 127.0.0.1
2016/04/14 12:58:13 [INFO] nomad: starting 4 scheduling worker(s) for [system noop service batch _core]
2016/04/14 12:58:13 [INFO] raft: Node at 127.0.0.1:15195 [Leader] entering Leader state
2016/04/14 12:58:13 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:13 [DEBUG] raft: Node 127.0.0.1:15195 updated peer set (2): [127.0.0.1:15195]
2016/04/14 12:58:13 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:13 [INFO] nomad: adding server Node 15195.global (Addr: 127.0.0.1:15195) (DC: dc1)
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [WARN] serf: Shutdown without a Leave
--- PASS: TestPeriodicDispatch_RunningChildren_ActiveAllocs (0.03s)
=== RUN   TestEvaluatePool
--- PASS: TestEvaluatePool (0.00s)
=== RUN   TestEvaluatePool_Resize
--- PASS: TestEvaluatePool_Resize (0.00s)
=== RUN   TestPlanApply_applyPlan
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15197.global 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:15197 [Leader] entering Leader state
2016/04/14 12:58:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15197 updated peer set (2): [127.0.0.1:15197]
2016/04/14 12:58:14 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15197.global (Addr: 127.0.0.1:15197) (DC: dc1)
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [WARN] serf: Shutdown without a Leave
--- PASS: TestPlanApply_applyPlan (0.03s)
=== RUN   TestPlanApply_EvalPlan_Simple
--- PASS: TestPlanApply_EvalPlan_Simple (0.00s)
=== RUN   TestPlanApply_EvalPlan_Partial
--- PASS: TestPlanApply_EvalPlan_Partial (0.00s)
=== RUN   TestPlanApply_EvalPlan_Partial_AllAtOnce
--- PASS: TestPlanApply_EvalPlan_Partial_AllAtOnce (0.00s)
=== RUN   TestPlanApply_EvalNodePlan_Simple
--- PASS: TestPlanApply_EvalNodePlan_Simple (0.00s)
=== RUN   TestPlanApply_EvalNodePlan_NodeNotReady
--- PASS: TestPlanApply_EvalNodePlan_NodeNotReady (0.00s)
=== RUN   TestPlanApply_EvalNodePlan_NodeDrain
--- PASS: TestPlanApply_EvalNodePlan_NodeDrain (0.00s)
=== RUN   TestPlanApply_EvalNodePlan_NodeNotExist
--- PASS: TestPlanApply_EvalNodePlan_NodeNotExist (0.00s)
=== RUN   TestPlanApply_EvalNodePlan_NodeFull
--- PASS: TestPlanApply_EvalNodePlan_NodeFull (0.00s)
=== RUN   TestPlanApply_EvalNodePlan_UpdateExisting
--- PASS: TestPlanApply_EvalNodePlan_UpdateExisting (0.00s)
=== RUN   TestPlanApply_EvalNodePlan_NodeFull_Evict
--- PASS: TestPlanApply_EvalNodePlan_NodeFull_Evict (0.00s)
=== RUN   TestPlanApply_EvalNodePlan_NodeFull_AllocEvict
--- PASS: TestPlanApply_EvalNodePlan_NodeFull_AllocEvict (0.00s)
=== RUN   TestPlanApply_EvalNodePlan_NodeDown_EvictOnly
--- PASS: TestPlanApply_EvalNodePlan_NodeDown_EvictOnly (0.00s)
=== RUN   TestPlanEndpoint_Submit
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15199.global 127.0.0.1
2016/04/14 12:58:14 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:15199 [Leader] entering Leader state
2016/04/14 12:58:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15199 updated peer set (2): [127.0.0.1:15199]
2016/04/14 12:58:14 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15199.global (Addr: 127.0.0.1:15199) (DC: dc1)
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [WARN] serf: Shutdown without a Leave
--- PASS: TestPlanEndpoint_Submit (0.03s)
=== RUN   TestPlanQueue_Enqueue_Dequeue
--- PASS: TestPlanQueue_Enqueue_Dequeue (0.00s)
=== RUN   TestPlanQueue_Enqueue_Disable
--- PASS: TestPlanQueue_Enqueue_Disable (0.00s)
=== RUN   TestPlanQueue_Dequeue_Timeout
--- PASS: TestPlanQueue_Dequeue_Timeout (0.01s)
=== RUN   TestPlanQueue_Dequeue_Priority
--- PASS: TestPlanQueue_Dequeue_Priority (0.00s)
=== RUN   TestPlanQueue_Dequeue_FIFO
--- PASS: TestPlanQueue_Dequeue_FIFO (0.21s)
=== RUN   TestRegionList
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15201.region1 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15203.region2 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:14 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15204
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:15201 [Leader] entering Leader state
2016/04/14 12:58:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15201 updated peer set (2): [127.0.0.1:15201]
2016/04/14 12:58:14 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15201.region1 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:15203 [Leader] entering Leader state
2016/04/14 12:58:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15203 updated peer set (2): [127.0.0.1:15203]
2016/04/14 12:58:14 [DEBUG] memberlist: TCP connection from=127.0.0.1:59874
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15201.region1 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15203.region2 (Addr: 127.0.0.1:15203) (DC: dc1)
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15201.region1 (Addr: 127.0.0.1:15201) (DC: dc1)
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15203.region2 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15203.region2 (Addr: 127.0.0.1:15203) (DC: dc1)
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [WARN] serf: Shutdown without a Leave
--- PASS: TestRegionList (0.05s)
=== RUN   TestRPC_forwardLeader
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15205.global 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15207.global 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: starting 4 scheduling worker(s) for [batch system noop service _core]
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:15205 [Leader] entering Leader state
2016/04/14 12:58:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15205 updated peer set (2): [127.0.0.1:15205]
2016/04/14 12:58:14 [DEBUG] memberlist: TCP connection from=127.0.0.1:54446
2016/04/14 12:58:14 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15205.global (Addr: 127.0.0.1:15205) (DC: dc1)
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:15207 [Follower] entering Follower state
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15207.global (Addr: 127.0.0.1:15207) (DC: dc1)
2016/04/14 12:58:14 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15206
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15207.global 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15207.global (Addr: 127.0.0.1:15207) (DC: dc1)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15205 updated peer set (2): [127.0.0.1:15207 127.0.0.1:15205]
2016/04/14 12:58:14 [INFO] raft: Added peer 127.0.0.1:15207, starting replication
2016/04/14 12:58:14 [DEBUG] raft-net: 127.0.0.1:15207 accepted connection from: 127.0.0.1:49888
2016/04/14 12:58:14 [WARN] raft: Failed to get previous log: 2 log not found (last: 0)
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15205.global 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15205.global (Addr: 127.0.0.1:15205) (DC: dc1)
2016/04/14 12:58:14 [WARN] raft: AppendEntries to 127.0.0.1:15207 rejected, sending older logs (next: 1)
2016/04/14 12:58:14 [DEBUG] raft-net: 127.0.0.1:15207 accepted connection from: 127.0.0.1:49889
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15207 updated peer set (2): [127.0.0.1:15205]
2016/04/14 12:58:14 [INFO] raft: pipelining replication to peer 127.0.0.1:15207
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15205 updated peer set (2): [127.0.0.1:15207 127.0.0.1:15205]
2016/04/14 12:58:14 [INFO] nomad: added raft peer: Node 15207.global (Addr: 127.0.0.1:15207) (DC: dc1)
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:14 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/04/14 12:58:14 [ERR] raft: Failed to heartbeat to 127.0.0.1:15207: EOF
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:14 [INFO] raft: aborting pipeline replication to peer 127.0.0.1:15207
2016/04/14 12:58:14 [INFO] nomad: cluster leadership lost
--- PASS: TestRPC_forwardLeader (0.09s)
=== RUN   TestRPC_forwardRegion
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15209.global 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15211.region2 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:15209 [Leader] entering Leader state
2016/04/14 12:58:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15209 updated peer set (2): [127.0.0.1:15209]
2016/04/14 12:58:14 [DEBUG] memberlist: TCP connection from=127.0.0.1:36794
2016/04/14 12:58:14 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15209.global (Addr: 127.0.0.1:15209) (DC: dc1)
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:15211 [Leader] entering Leader state
2016/04/14 12:58:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15211 updated peer set (2): [127.0.0.1:15211]
2016/04/14 12:58:14 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15211.region2 (Addr: 127.0.0.1:15211) (DC: dc1)
2016/04/14 12:58:14 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15210
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15211.region2 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15211.region2 (Addr: 127.0.0.1:15211) (DC: dc1)
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15209.global 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15209.global (Addr: 127.0.0.1:15209) (DC: dc1)
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:14 [INFO] nomad: cluster leadership lost
--- PASS: TestRPC_forwardRegion (0.08s)
=== RUN   TestNomad_JoinPeer
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15213.global 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15215.region2 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:15213 [Leader] entering Leader state
2016/04/14 12:58:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15213 updated peer set (2): [127.0.0.1:15213]
2016/04/14 12:58:14 [DEBUG] memberlist: TCP connection from=127.0.0.1:50949
2016/04/14 12:58:14 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15213.global (Addr: 127.0.0.1:15213) (DC: dc1)
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:15215 [Leader] entering Leader state
2016/04/14 12:58:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15215 updated peer set (2): [127.0.0.1:15215]
2016/04/14 12:58:14 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15215.region2 (Addr: 127.0.0.1:15215) (DC: dc1)
2016/04/14 12:58:14 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15214
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15215.region2 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15215.region2 (Addr: 127.0.0.1:15215) (DC: dc1)
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15213.global 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15213.global (Addr: 127.0.0.1:15213) (DC: dc1)
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [WARN] serf: Shutdown without a Leave
--- PASS: TestNomad_JoinPeer (0.06s)
=== RUN   TestNomad_RemovePeer
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15217.global 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15219.region2 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:14 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15218
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:15217 [Leader] entering Leader state
2016/04/14 12:58:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15217 updated peer set (2): [127.0.0.1:15217]
2016/04/14 12:58:14 [DEBUG] memberlist: TCP connection from=127.0.0.1:46139
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15219.region2 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15217.global (Addr: 127.0.0.1:15217) (DC: dc1)
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15219.region2 (Addr: 127.0.0.1:15219) (DC: dc1)
2016/04/14 12:58:14 [INFO] raft: Node at 127.0.0.1:15219 [Leader] entering Leader state
2016/04/14 12:58:14 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:14 [DEBUG] raft: Node 127.0.0.1:15219 updated peer set (2): [127.0.0.1:15219]
2016/04/14 12:58:14 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15219.region2 (Addr: 127.0.0.1:15219) (DC: dc1)
2016/04/14 12:58:14 [INFO] serf: EventMemberJoin: Node 15217.global 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: adding server Node 15217.global (Addr: 127.0.0.1:15217) (DC: dc1)
2016/04/14 12:58:14 [INFO] nomad: server starting leave
2016/04/14 12:58:14 [DEBUG] serf: messageLeaveType: Node 15219.region2
2016/04/14 12:58:14 [DEBUG] serf: messageJoinType: Node 15219.region2
2016/04/14 12:58:14 [DEBUG] serf: messageLeaveType: Node 15219.region2
2016/04/14 12:58:14 [DEBUG] serf: messageJoinType: Node 15219.region2
2016/04/14 12:58:14 [INFO] serf: EventMemberLeave: Node 15219.region2 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: removing server Node 15219.region2 (Addr: 127.0.0.1:15219) (DC: dc1)
2016/04/14 12:58:14 [DEBUG] serf: messageLeaveType: Node 15219.region2
2016/04/14 12:58:14 [INFO] serf: EventMemberLeave: Node 15219.region2 127.0.0.1
2016/04/14 12:58:14 [INFO] nomad: removing server Node 15219.region2 (Addr: 127.0.0.1:15219) (DC: dc1)
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [INFO] nomad: cluster leadership lost
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [INFO] nomad: shutting down server
2016/04/14 12:58:14 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:14 [INFO] nomad: cluster leadership lost
--- PASS: TestNomad_RemovePeer (0.23s)
=== RUN   TestNomad_BootstrapExpect
2016/04/14 12:58:15 [INFO] raft: Node at 127.0.0.1:15221 [Leader] entering Leader state
2016/04/14 12:58:15 [INFO] serf: EventMemberJoin: Node 15221.global 127.0.0.1
2016/04/14 12:58:15 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:58:15 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:15 [INFO] nomad: adding server Node 15221.global (Addr: 127.0.0.1:15221) (DC: dc1)
2016/04/14 12:58:15 [DEBUG] raft: Node 127.0.0.1:15221 updated peer set (2): [127.0.0.1:15221]
2016/04/14 12:58:16 [INFO] raft: Node at 127.0.0.1:15223 [Leader] entering Leader state
2016/04/14 12:58:16 [INFO] serf: EventMemberJoin: Node 15223.global 127.0.0.1
2016/04/14 12:58:16 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:16 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:16 [INFO] nomad: adding server Node 15223.global (Addr: 127.0.0.1:15223) (DC: dc1)
2016/04/14 12:58:16 [DEBUG] memberlist: TCP connection from=127.0.0.1:53326
2016/04/14 12:58:16 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15222
2016/04/14 12:58:16 [INFO] serf: EventMemberJoin: Node 15223.global 127.0.0.1
2016/04/14 12:58:16 [INFO] nomad: adding server Node 15223.global (Addr: 127.0.0.1:15223) (DC: dc1)
2016/04/14 12:58:16 [DEBUG] raft: Node 127.0.0.1:15221 updated peer set (2): [127.0.0.1:15223 127.0.0.1:15221]
2016/04/14 12:58:16 [INFO] raft: Added peer 127.0.0.1:15223, starting replication
2016/04/14 12:58:16 [INFO] serf: EventMemberJoin: Node 15221.global 127.0.0.1
2016/04/14 12:58:16 [INFO] nomad: adding server Node 15221.global (Addr: 127.0.0.1:15221) (DC: dc1)
2016/04/14 12:58:16 [INFO] nomad: Attempting bootstrap with nodes: [127.0.0.1:15223 127.0.0.1:15221]
2016/04/14 12:58:16 [DEBUG] raft-net: 127.0.0.1:15223 accepted connection from: 127.0.0.1:57607
2016/04/14 12:58:16 [DEBUG] raft-net: 127.0.0.1:15223 accepted connection from: 127.0.0.1:57608
2016/04/14 12:58:16 [DEBUG] serf: messageJoinType: Node 15223.global
2016/04/14 12:58:16 [DEBUG] serf: messageJoinType: Node 15223.global
2016/04/14 12:58:16 [DEBUG] serf: messageJoinType: Node 15223.global
2016/04/14 12:58:16 [DEBUG] serf: messageJoinType: Node 15223.global
2016/04/14 12:58:16 [DEBUG] raft: Failed to contact 127.0.0.1:15223 in 309.231667ms
2016/04/14 12:58:16 [WARN] raft: Failed to contact quorum of nodes, stepping down
2016/04/14 12:58:16 [INFO] raft: Node at 127.0.0.1:15221 [Follower] entering Follower state
2016/04/14 12:58:16 [INFO] nomad: cluster leadership lost
2016/04/14 12:58:16 [ERR] nomad: failed to add raft peer: leadership lost while committing log
2016/04/14 12:58:16 [ERR] nomad: failed to reconcile member: {Node 15223.global 127.0.0.1 15224 map[build:unittest vsn:1 region:global dc:dc1 vsn_min:1 port:15223 role:nomad vsn_max:1 expect:2] alive 1 3 2 2 4 4}: leadership lost while committing log
2016/04/14 12:58:16 [WARN] raft: Heartbeat timeout reached, starting election
2016/04/14 12:58:16 [INFO] raft: Node at 127.0.0.1:15221 [Candidate] entering Candidate state
2016/04/14 12:58:16 [ERR] raft: Failed to AppendEntries to 127.0.0.1:15223: read tcp 127.0.0.1:57607->127.0.0.1:15223: i/o timeout
2016/04/14 12:58:16 [ERR] raft: Failed to heartbeat to 127.0.0.1:15223: read tcp 127.0.0.1:57608->127.0.0.1:15223: i/o timeout
2016/04/14 12:58:17 [DEBUG] raft: Node 127.0.0.1:15223 updated peer set (2): [127.0.0.1:15223]
2016/04/14 12:58:17 [INFO] raft: Node at 127.0.0.1:15223 [Follower] entering Follower state
2016/04/14 12:58:17 [ERR] nomad: failed to wait for barrier: leadership lost while committing log
2016/04/14 12:58:17 [INFO] nomad: cluster leadership lost
2016/04/14 12:58:17 [INFO] nomad: shutting down server
2016/04/14 12:58:17 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:17 [DEBUG] raft-net: 127.0.0.1:15223 accepted connection from: 127.0.0.1:57609
2016/04/14 12:58:17 [DEBUG] memberlist: Failed UDP ping: Node 15221.global (timeout reached)
2016/04/14 12:58:17 [DEBUG] memberlist: TCP connection from=127.0.0.1:53330
2016/04/14 12:58:17 [DEBUG] memberlist: Failed UDP ping: Node 15223.global (timeout reached)
2016/04/14 12:58:17 [WARN] memberlist: Was able to reach Node 15221.global via TCP but not UDP, network may be misconfigured and not allowing bidirectional UDP
2016/04/14 12:58:17 [INFO] memberlist: Suspect Node 15223.global has failed, no acks received
2016/04/14 12:58:17 [ERR] raft-net: Failed to decode incoming command: transport shutdown
2016/04/14 12:58:17 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15223: EOF
2016/04/14 12:58:17 [DEBUG] memberlist: Failed UDP ping: Node 15223.global (timeout reached)
2016/04/14 12:58:17 [INFO] memberlist: Suspect Node 15223.global has failed, no acks received
2016/04/14 12:58:17 [INFO] memberlist: Marking Node 15223.global as failed, suspect timeout reached
2016/04/14 12:58:17 [INFO] serf: EventMemberFailed: Node 15223.global 127.0.0.1
2016/04/14 12:58:17 [INFO] nomad: removing server Node 15223.global (Addr: 127.0.0.1:15223) (DC: dc1)
2016/04/14 12:58:17 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:17 [DEBUG] raft: Vote granted from 127.0.0.1:15221. Tally: 1
2016/04/14 12:58:17 [DEBUG] memberlist: Failed UDP ping: Node 15223.global (timeout reached)
2016/04/14 12:58:17 [INFO] memberlist: Suspect Node 15223.global has failed, no acks received
2016/04/14 12:58:17 [WARN] raft: Election timeout reached, restarting election
2016/04/14 12:58:17 [INFO] raft: Node at 127.0.0.1:15221 [Candidate] entering Candidate state
2016/04/14 12:58:17 [INFO] nomad: shutting down server
2016/04/14 12:58:17 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:18 [DEBUG] raft: Votes needed: 2
2016/04/14 12:58:18 [ERR] raft: Failed to make RequestVote RPC to 127.0.0.1:15223: read tcp 127.0.0.1:57615->127.0.0.1:15223: i/o timeout
--- PASS: TestNomad_BootstrapExpect (3.46s)
=== RUN   TestNomad_BadExpect
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15225.global 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: starting 4 scheduling worker(s) for [noop service batch system _core]
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15227.global 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:18 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15226
2016/04/14 12:58:18 [INFO] raft: Node at 127.0.0.1:15225 [Follower] entering Follower state
2016/04/14 12:58:18 [DEBUG] memberlist: TCP connection from=127.0.0.1:47391
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15227.global 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15225.global (Addr: 127.0.0.1:15225) (DC: dc1)
2016/04/14 12:58:18 [ERR] nomad: peer {Node 15227.global 127.0.0.1 15228 map[vsn_max:1 build:unittest port:15227 role:nomad expect:3 vsn_min:1 region:global dc:dc1 vsn:1] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15227.global (Addr: 127.0.0.1:15227) (DC: dc1)
2016/04/14 12:58:18 [ERR] nomad: peer {Node 15227.global 127.0.0.1 15228 map[region:global dc:dc1 vsn:1 vsn_min:1 build:unittest port:15227 role:nomad expect:3 vsn_max:1] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/04/14 12:58:18 [INFO] raft: Node at 127.0.0.1:15227 [Follower] entering Follower state
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15227.global (Addr: 127.0.0.1:15227) (DC: dc1)
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15225.global 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15225.global (Addr: 127.0.0.1:15225) (DC: dc1)
2016/04/14 12:58:18 [ERR] nomad: peer {Node 15225.global 127.0.0.1 15226 map[region:global dc:dc1 vsn:1 vsn_min:1 build:unittest role:nomad port:15225 expect:2 vsn_max:1] alive 1 3 2 2 4 4} has a conflicting expect value. All nodes should expect the same number.
2016/04/14 12:58:18 [INFO] nomad: shutting down server
2016/04/14 12:58:18 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:18 [INFO] nomad: shutting down server
2016/04/14 12:58:18 [WARN] serf: Shutdown without a Leave
--- PASS: TestNomad_BadExpect (0.05s)
=== RUN   TestServer_RPC
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15229.global 127.0.0.1
2016/04/14 12:58:18 [INFO] raft: Node at 127.0.0.1:15229 [Leader] entering Leader state
2016/04/14 12:58:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:18 [DEBUG] raft: Node 127.0.0.1:15229 updated peer set (2): [127.0.0.1:15229]
2016/04/14 12:58:18 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:18 [INFO] nomad: shutting down server
2016/04/14 12:58:18 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:18 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:18 [ERR] nomad: failed to wait for barrier: raft is already shutdown
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15229.global (Addr: 127.0.0.1:15229) (DC: dc1)
--- PASS: TestServer_RPC (0.01s)
=== RUN   TestServer_Regions
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15231.region1 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15233.region2 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: starting 4 scheduling worker(s) for [system noop service batch _core]
2016/04/14 12:58:18 [DEBUG] memberlist: Initiating push/pull sync with: 127.0.0.1:15234
2016/04/14 12:58:18 [INFO] raft: Node at 127.0.0.1:15231 [Leader] entering Leader state
2016/04/14 12:58:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:18 [DEBUG] raft: Node 127.0.0.1:15231 updated peer set (2): [127.0.0.1:15231]
2016/04/14 12:58:18 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15231.region1 (Addr: 127.0.0.1:15231) (DC: dc1)
2016/04/14 12:58:18 [INFO] raft: Node at 127.0.0.1:15233 [Leader] entering Leader state
2016/04/14 12:58:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:18 [DEBUG] raft: Node 127.0.0.1:15233 updated peer set (2): [127.0.0.1:15233]
2016/04/14 12:58:18 [DEBUG] memberlist: TCP connection from=127.0.0.1:57630
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15231.region1 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15233.region2 (Addr: 127.0.0.1:15233) (DC: dc1)
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15231.region1 (Addr: 127.0.0.1:15231) (DC: dc1)
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15233.region2 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15233.region2 (Addr: 127.0.0.1:15233) (DC: dc1)
2016/04/14 12:58:18 [INFO] nomad: shutting down server
2016/04/14 12:58:18 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:18 [INFO] nomad: shutting down server
2016/04/14 12:58:18 [WARN] serf: Shutdown without a Leave
--- PASS: TestServer_Regions (0.04s)
=== RUN   TestStatusVersion
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15235.global 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: starting 4 scheduling worker(s) for [system noop service batch _core]
2016/04/14 12:58:18 [INFO] raft: Node at 127.0.0.1:15235 [Leader] entering Leader state
2016/04/14 12:58:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:18 [DEBUG] raft: Node 127.0.0.1:15235 updated peer set (2): [127.0.0.1:15235]
2016/04/14 12:58:18 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15235.global (Addr: 127.0.0.1:15235) (DC: dc1)
2016/04/14 12:58:18 [INFO] nomad: shutting down server
2016/04/14 12:58:18 [WARN] serf: Shutdown without a Leave
--- PASS: TestStatusVersion (0.02s)
=== RUN   TestStatusPing
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15237.global 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:18 [INFO] raft: Node at 127.0.0.1:15237 [Leader] entering Leader state
2016/04/14 12:58:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:18 [DEBUG] raft: Node 127.0.0.1:15237 updated peer set (2): [127.0.0.1:15237]
2016/04/14 12:58:18 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15237.global (Addr: 127.0.0.1:15237) (DC: dc1)
2016/04/14 12:58:18 [INFO] nomad: shutting down server
2016/04/14 12:58:18 [WARN] serf: Shutdown without a Leave
--- PASS: TestStatusPing (0.01s)
=== RUN   TestStatusLeader
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15239.global 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:18 [INFO] raft: Node at 127.0.0.1:15239 [Leader] entering Leader state
2016/04/14 12:58:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:18 [DEBUG] raft: Node 127.0.0.1:15239 updated peer set (2): [127.0.0.1:15239]
2016/04/14 12:58:18 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15239.global (Addr: 127.0.0.1:15239) (DC: dc1)
2016/04/14 12:58:18 [INFO] nomad: shutting down server
2016/04/14 12:58:18 [WARN] serf: Shutdown without a Leave
--- PASS: TestStatusLeader (0.03s)
=== RUN   TestStatusPeers
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15241.global 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:18 [INFO] raft: Node at 127.0.0.1:15241 [Leader] entering Leader state
2016/04/14 12:58:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:18 [DEBUG] raft: Node 127.0.0.1:15241 updated peer set (2): [127.0.0.1:15241]
2016/04/14 12:58:18 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15241.global (Addr: 127.0.0.1:15241) (DC: dc1)
2016/04/14 12:58:18 [INFO] nomad: shutting down server
2016/04/14 12:58:18 [WARN] serf: Shutdown without a Leave
--- PASS: TestStatusPeers (0.02s)
=== RUN   TestSystemEndpoint_GarbageCollect
2016/04/14 12:58:18 [INFO] serf: EventMemberJoin: Node 15243.global 127.0.0.1
2016/04/14 12:58:18 [INFO] nomad: starting 4 scheduling worker(s) for [service batch system noop _core]
2016/04/14 12:58:18 [INFO] raft: Node at 127.0.0.1:15243 [Leader] entering Leader state
2016/04/14 12:58:18 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:18 [DEBUG] raft: Node 127.0.0.1:15243 updated peer set (2): [127.0.0.1:15243]
2016/04/14 12:58:18 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:18 [INFO] nomad: adding server Node 15243.global (Addr: 127.0.0.1:15243) (DC: dc1)
2016/04/14 12:58:18 [DEBUG] worker: dequeued evaluation 85a7cc4e-7ad9-25bb-5796-154f219ae0d6
2016/04/14 12:58:18 [DEBUG] sched.core: forced eval GC
2016/04/14 12:58:18 [DEBUG] sched.core: eval GC: scanning before index 18446744073709551615 (1h0m0s)
2016/04/14 12:58:18 [DEBUG] worker: ack for evaluation 85a7cc4e-7ad9-25bb-5796-154f219ae0d6
2016/04/14 12:58:18 [DEBUG] worker: dequeued evaluation 9439c406-77b0-c05a-eb91-20a46d0e9299
2016/04/14 12:58:18 [DEBUG] sched.core: forced job GC
2016/04/14 12:58:18 [DEBUG] sched.core: job GC: scanning before index 18446744073709551615 (4h0m0s)
2016/04/14 12:58:18 [DEBUG] sched.core: job GC: 1 jobs, 0 evaluations, 0 allocs eligible
2016/04/14 12:58:18 [DEBUG] worker: ack for evaluation 9439c406-77b0-c05a-eb91-20a46d0e9299
2016/04/14 12:58:18 [DEBUG] worker: dequeued evaluation 5b9cf3ef-2c98-9f03-2b7b-b58c042cafe0
2016/04/14 12:58:18 [DEBUG] sched.core: forced node GC
2016/04/14 12:58:18 [DEBUG] sched.core: node GC: scanning before index 18446744073709551615 (24h0m0s)
2016/04/14 12:58:18 [DEBUG] worker: ack for evaluation 5b9cf3ef-2c98-9f03-2b7b-b58c042cafe0
2016/04/14 12:58:18 [DEBUG] worker: dequeued evaluation 6a0d63b1-6baa-61b2-e8cb-c0f614031a07
2016/04/14 12:58:18 [DEBUG] sched: <Eval '6a0d63b1-6baa-61b2-e8cb-c0f614031a07' JobID: '24b75735-26cb-6735-2363-06d807bb5257'>: allocs: (place 0) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:18 [DEBUG] sched: <Eval '6a0d63b1-6baa-61b2-e8cb-c0f614031a07' JobID: '24b75735-26cb-6735-2363-06d807bb5257'>: setting status to complete
2016/04/14 12:58:18 [DEBUG] worker: updated evaluation <Eval '6a0d63b1-6baa-61b2-e8cb-c0f614031a07' JobID: '24b75735-26cb-6735-2363-06d807bb5257'>
2016/04/14 12:58:18 [DEBUG] worker: ack for evaluation 6a0d63b1-6baa-61b2-e8cb-c0f614031a07
2016/04/14 12:58:18 [INFO] nomad: shutting down server
2016/04/14 12:58:18 [WARN] serf: Shutdown without a Leave
--- PASS: TestSystemEndpoint_GarbageCollect (0.52s)
=== RUN   TestTimeTable
--- PASS: TestTimeTable (0.00s)
=== RUN   TestTimeTable_SerializeDeserialize
--- PASS: TestTimeTable_SerializeDeserialize (0.00s)
=== RUN   TestTimeTable_Overflow
--- PASS: TestTimeTable_Overflow (0.00s)
=== RUN   TestIsNomadServer
--- PASS: TestIsNomadServer (0.00s)
=== RUN   TestRandomStagger
--- PASS: TestRandomStagger (0.00s)
=== RUN   TestShuffleStrings
--- PASS: TestShuffleStrings (0.00s)
=== RUN   TestMaxUint64
--- PASS: TestMaxUint64 (0.00s)
=== RUN   TestRateScaledInterval
--- PASS: TestRateScaledInterval (0.00s)
=== RUN   TestWorker_dequeueEvaluation
2016/04/14 12:58:19 [INFO] serf: EventMemberJoin: Node 15245.global 127.0.0.1
2016/04/14 12:58:19 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:19 [INFO] raft: Node at 127.0.0.1:15245 [Leader] entering Leader state
2016/04/14 12:58:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:19 [DEBUG] raft: Node 127.0.0.1:15245 updated peer set (2): [127.0.0.1:15245]
2016/04/14 12:58:19 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:19 [INFO] nomad: adding server Node 15245.global (Addr: 127.0.0.1:15245) (DC: dc1)
2016/04/14 12:58:19 [DEBUG] worker: dequeued evaluation 8f6cc315-41dd-5942-ae70-8a218b27ff2d
2016/04/14 12:58:19 [INFO] nomad: shutting down server
2016/04/14 12:58:19 [WARN] serf: Shutdown without a Leave
--- PASS: TestWorker_dequeueEvaluation (0.03s)
=== RUN   TestWorker_dequeueEvaluation_paused
2016/04/14 12:58:19 [INFO] serf: EventMemberJoin: Node 15247.global 127.0.0.1
2016/04/14 12:58:19 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:19 [INFO] raft: Node at 127.0.0.1:15247 [Leader] entering Leader state
2016/04/14 12:58:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:19 [DEBUG] raft: Node 127.0.0.1:15247 updated peer set (2): [127.0.0.1:15247]
2016/04/14 12:58:19 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:19 [INFO] nomad: adding server Node 15247.global (Addr: 127.0.0.1:15247) (DC: dc1)
2016/04/14 12:58:19 [DEBUG] worker: dequeued evaluation d7ae6cd6-1aa8-d417-1045-6c2e97cd09ae
2016/04/14 12:58:19 [INFO] nomad: shutting down server
2016/04/14 12:58:19 [WARN] serf: Shutdown without a Leave
--- PASS: TestWorker_dequeueEvaluation_paused (0.13s)
=== RUN   TestWorker_dequeueEvaluation_shutdown
2016/04/14 12:58:19 [INFO] serf: EventMemberJoin: Node 15249.global 127.0.0.1
2016/04/14 12:58:19 [INFO] raft: Node at 127.0.0.1:15249 [Leader] entering Leader state
2016/04/14 12:58:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:19 [DEBUG] raft: Node 127.0.0.1:15249 updated peer set (2): [127.0.0.1:15249]
2016/04/14 12:58:19 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:19 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:19 [INFO] nomad: adding server Node 15249.global (Addr: 127.0.0.1:15249) (DC: dc1)
2016/04/14 12:58:19 [INFO] nomad: shutting down server
2016/04/14 12:58:19 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:19 [INFO] nomad: shutting down server
--- PASS: TestWorker_dequeueEvaluation_shutdown (0.03s)
=== RUN   TestWorker_sendAck
2016/04/14 12:58:19 [INFO] serf: EventMemberJoin: Node 15251.global 127.0.0.1
2016/04/14 12:58:19 [INFO] raft: Node at 127.0.0.1:15251 [Leader] entering Leader state
2016/04/14 12:58:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:19 [DEBUG] raft: Node 127.0.0.1:15251 updated peer set (2): [127.0.0.1:15251]
2016/04/14 12:58:19 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:19 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:19 [INFO] nomad: adding server Node 15251.global (Addr: 127.0.0.1:15251) (DC: dc1)
2016/04/14 12:58:19 [DEBUG] worker: dequeued evaluation 0eeb8851-783f-13ea-1185-3e43968dd21f
2016/04/14 12:58:19 [DEBUG] worker: nack for evaluation 0eeb8851-783f-13ea-1185-3e43968dd21f
2016/04/14 12:58:19 [DEBUG] worker: dequeued evaluation 0eeb8851-783f-13ea-1185-3e43968dd21f
2016/04/14 12:58:19 [DEBUG] worker: ack for evaluation 0eeb8851-783f-13ea-1185-3e43968dd21f
2016/04/14 12:58:19 [INFO] nomad: shutting down server
2016/04/14 12:58:19 [WARN] serf: Shutdown without a Leave
--- PASS: TestWorker_sendAck (0.03s)
=== RUN   TestWorker_waitForIndex
2016/04/14 12:58:19 [INFO] serf: EventMemberJoin: Node 15253.global 127.0.0.1
2016/04/14 12:58:19 [INFO] raft: Node at 127.0.0.1:15253 [Leader] entering Leader state
2016/04/14 12:58:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:19 [DEBUG] raft: Node 127.0.0.1:15253 updated peer set (2): [127.0.0.1:15253]
2016/04/14 12:58:19 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:19 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:19 [INFO] nomad: adding server Node 15253.global (Addr: 127.0.0.1:15253) (DC: dc1)
2016/04/14 12:58:19 [INFO] nomad: shutting down server
2016/04/14 12:58:19 [WARN] serf: Shutdown without a Leave
2016/04/14 12:58:19 [INFO] nomad: cluster leadership lost
--- PASS: TestWorker_waitForIndex (0.06s)
=== RUN   TestWorker_invokeScheduler
2016/04/14 12:58:19 [INFO] serf: EventMemberJoin: Node 15255.global 127.0.0.1
2016/04/14 12:58:19 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:19 [INFO] nomad: shutting down server
2016/04/14 12:58:19 [WARN] serf: Shutdown without a Leave
--- PASS: TestWorker_invokeScheduler (0.01s)
=== RUN   TestWorker_SubmitPlan
2016/04/14 12:58:19 [INFO] serf: EventMemberJoin: Node 15257.global 127.0.0.1
2016/04/14 12:58:19 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:19 [INFO] raft: Node at 127.0.0.1:15257 [Leader] entering Leader state
2016/04/14 12:58:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:19 [DEBUG] raft: Node 127.0.0.1:15257 updated peer set (2): [127.0.0.1:15257]
2016/04/14 12:58:19 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:19 [INFO] nomad: adding server Node 15257.global (Addr: 127.0.0.1:15257) (DC: dc1)
2016/04/14 12:58:19 [DEBUG] worker: submitted plan for evaluation 9b50ba43-1787-7f3c-adf3-c3387a3171bb
2016/04/14 12:58:19 [INFO] nomad: shutting down server
2016/04/14 12:58:19 [WARN] serf: Shutdown without a Leave
--- PASS: TestWorker_SubmitPlan (0.04s)
=== RUN   TestWorker_SubmitPlan_MissingNodeRefresh
2016/04/14 12:58:19 [INFO] serf: EventMemberJoin: Node 15259.global 127.0.0.1
2016/04/14 12:58:19 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:19 [INFO] raft: Node at 127.0.0.1:15259 [Leader] entering Leader state
2016/04/14 12:58:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:19 [DEBUG] raft: Node 127.0.0.1:15259 updated peer set (2): [127.0.0.1:15259]
2016/04/14 12:58:19 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:19 [INFO] nomad: adding server Node 15259.global (Addr: 127.0.0.1:15259) (DC: dc1)
2016/04/14 12:58:19 [DEBUG] worker: submitted plan for evaluation 605da20e-6d58-a2f0-fe71-7f234fe688d3
2016/04/14 12:58:19 [DEBUG] worker: refreshing state to index 3
2016/04/14 12:58:19 [INFO] nomad: shutting down server
2016/04/14 12:58:19 [WARN] serf: Shutdown without a Leave
--- PASS: TestWorker_SubmitPlan_MissingNodeRefresh (0.03s)
=== RUN   TestWorker_UpdateEval
2016/04/14 12:58:19 [INFO] serf: EventMemberJoin: Node 15261.global 127.0.0.1
2016/04/14 12:58:19 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:19 [INFO] raft: Node at 127.0.0.1:15261 [Leader] entering Leader state
2016/04/14 12:58:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:19 [DEBUG] raft: Node 127.0.0.1:15261 updated peer set (2): [127.0.0.1:15261]
2016/04/14 12:58:19 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:19 [INFO] nomad: adding server Node 15261.global (Addr: 127.0.0.1:15261) (DC: dc1)
2016/04/14 12:58:19 [DEBUG] worker: updated evaluation <Eval 'f790f06a-23a0-cda0-eb9a-a2a29313984b' JobID: 'f192b8f3-64c0-90fc-f27f-00649913aaed'>
2016/04/14 12:58:19 [INFO] nomad: shutting down server
2016/04/14 12:58:19 [WARN] serf: Shutdown without a Leave
--- PASS: TestWorker_UpdateEval (0.03s)
=== RUN   TestWorker_CreateEval
2016/04/14 12:58:19 [INFO] serf: EventMemberJoin: Node 15263.global 127.0.0.1
2016/04/14 12:58:19 [WARN] nomad: no enabled schedulers
2016/04/14 12:58:19 [INFO] raft: Node at 127.0.0.1:15263 [Leader] entering Leader state
2016/04/14 12:58:19 [INFO] raft: Disabling EnableSingleNode (bootstrap)
2016/04/14 12:58:19 [DEBUG] raft: Node 127.0.0.1:15263 updated peer set (2): [127.0.0.1:15263]
2016/04/14 12:58:19 [INFO] nomad: cluster leadership acquired
2016/04/14 12:58:19 [INFO] nomad: adding server Node 15263.global (Addr: 127.0.0.1:15263) (DC: dc1)
2016/04/14 12:58:19 [DEBUG] worker: created evaluation <Eval '1206192d-33f2-ee51-0ede-1730972ef3c9' JobID: 'fb02f83d-8454-9070-274d-5d9c05e900ec'>
2016/04/14 12:58:19 [INFO] nomad: shutting down server
2016/04/14 12:58:19 [WARN] serf: Shutdown without a Leave
--- PASS: TestWorker_CreateEval (0.03s)
--- PASS: TestPeriodicDispatch_Add_NonPeriodic (0.00s)
2016/04/14 12:58:19 [DEBUG] nomad.periodic: registered periodic job "78909690-2a75-981d-98e2-0c63c0f8d439"
2016/04/14 12:58:19 [DEBUG] nomad.periodic: updated periodic job "78909690-2a75-981d-98e2-0c63c0f8d439"
--- PASS: TestPeriodicDispatch_Add_UpdateJob (0.00s)
2016/04/14 12:58:19 [DEBUG] nomad.periodic: registered periodic job "d091b6f5-06ce-cfdd-912a-b4c56dd2393d"
2016/04/14 12:58:19 [DEBUG] nomad.periodic: deregistered periodic job "d091b6f5-06ce-cfdd-912a-b4c56dd2393d"
--- PASS: TestPeriodicDispatch_Add_RemoveJob (0.00s)
2016/04/14 12:58:19 [DEBUG] nomad.periodic: registered periodic job "07b45a84-8dd2-9604-7d23-9da0f0a358be"
2016/04/14 12:58:19 [DEBUG] nomad.periodic: updated periodic job "07b45a84-8dd2-9604-7d23-9da0f0a358be"
2016/04/14 12:58:19 [DEBUG] nomad.periodic: launching job "07b45a84-8dd2-9604-7d23-9da0f0a358be" in 561.926761ms
2016/04/14 12:58:19 [DEBUG] nomad.periodic: launching job "07b45a84-8dd2-9604-7d23-9da0f0a358be" in 561.683094ms
2016/04/14 12:58:20 [DEBUG] nomad.periodic: launching job 07b45a84-8dd2-9604-7d23-9da0f0a358be at 2016-04-14 12:58:20 +0000 UTC
--- PASS: TestPeriodicDispatch_Add_TriggersUpdate (2.00s)
--- PASS: TestPeriodicDispatch_Remove_Untracked (0.00s)
2016/04/14 12:58:21 [DEBUG] nomad.periodic: registered periodic job "d73c7f9a-d50c-2041-c86f-548da4df283e"
2016/04/14 12:58:21 [DEBUG] nomad.periodic: deregistered periodic job "d73c7f9a-d50c-2041-c86f-548da4df283e"
--- PASS: TestPeriodicDispatch_Remove_Tracked (0.00s)
2016/04/14 12:58:21 [DEBUG] nomad.periodic: registered periodic job "94bee94f-82d6-f9bc-9328-50b835b2c3c1"
2016/04/14 12:58:21 [DEBUG] nomad.periodic: deregistered periodic job "94bee94f-82d6-f9bc-9328-50b835b2c3c1"
--- PASS: TestPeriodicDispatch_Remove_TriggersUpdate (2.00s)
--- PASS: TestPeriodicDispatch_ForceRun_Untracked (0.00s)
2016/04/14 12:58:23 [DEBUG] nomad.periodic: registered periodic job "8474972c-452f-6c74-5118-2b17b84acc2d"
--- PASS: TestPeriodicDispatch_ForceRun_Tracked (0.00s)
2016/04/14 12:58:23 [DEBUG] nomad.periodic: registered periodic job "afe7e5ed-a13d-5ad2-10ec-8db2d1535968"
2016/04/14 12:58:23 [DEBUG] nomad.periodic: launching job "afe7e5ed-a13d-5ad2-10ec-8db2d1535968" in 552.676427ms
2016/04/14 12:58:23 [DEBUG] nomad.periodic: launching job "afe7e5ed-a13d-5ad2-10ec-8db2d1535968" in 552.398094ms
2016/04/14 12:58:23 [DEBUG] nomad.periodic: launching job "8474972c-452f-6c74-5118-2b17b84acc2d" in 9.552236761s
2016/04/14 12:58:23 [DEBUG] nomad.periodic: launching job "8474972c-452f-6c74-5118-2b17b84acc2d" in 9.552003094s
2016/04/14 12:58:24 [DEBUG] nomad.periodic: launching job afe7e5ed-a13d-5ad2-10ec-8db2d1535968 at 2016-04-14 12:58:24 +0000 UTC
2016/04/14 12:58:24 [DEBUG] nomad.periodic: launching job "afe7e5ed-a13d-5ad2-10ec-8db2d1535968" in 998.770094ms
2016/04/14 12:58:25 [DEBUG] nomad.periodic: skipping launch of periodic job "afe7e5ed-a13d-5ad2-10ec-8db2d1535968" because job prohibits overlap
--- PASS: TestPeriodicDispatch_Run_DisallowOverlaps (3.00s)
2016/04/14 12:58:26 [DEBUG] nomad.periodic: registered periodic job "b27b14a6-695b-7ddb-1d27-d7957d43bf34"
2016/04/14 12:58:26 [DEBUG] nomad.periodic: launching job "b27b14a6-695b-7ddb-1d27-d7957d43bf34" in 551.752427ms
2016/04/14 12:58:26 [DEBUG] nomad.periodic: launching job "b27b14a6-695b-7ddb-1d27-d7957d43bf34" in 551.541427ms
2016/04/14 12:58:27 [DEBUG] nomad.periodic: launching job b27b14a6-695b-7ddb-1d27-d7957d43bf34 at 2016-04-14 12:58:27 +0000 UTC
2016/04/14 12:58:27 [DEBUG] nomad.periodic: launching job "b27b14a6-695b-7ddb-1d27-d7957d43bf34" in 999.00776ms
2016/04/14 12:58:28 [DEBUG] nomad.periodic: launching job b27b14a6-695b-7ddb-1d27-d7957d43bf34 at 2016-04-14 12:58:28 +0000 UTC
--- PASS: TestPeriodicDispatch_Run_Multiple (3.00s)
2016/04/14 12:58:29 [DEBUG] nomad.periodic: registered periodic job "40cbf211-23ad-8ac2-c72b-6879aea25520"
2016/04/14 12:58:29 [DEBUG] nomad.periodic: registered periodic job "5d34f199-0037-d9c5-824e-bec53bf67541"
2016/04/14 12:58:29 [DEBUG] nomad.periodic: launching job "40cbf211-23ad-8ac2-c72b-6879aea25520" in 550.382427ms
2016/04/14 12:58:29 [DEBUG] nomad.periodic: launching job "40cbf211-23ad-8ac2-c72b-6879aea25520" in 550.114427ms
2016/04/14 12:58:30 [DEBUG] nomad.periodic: launching job 40cbf211-23ad-8ac2-c72b-6879aea25520 at 2016-04-14 12:58:30 +0000 UTC
2016/04/14 12:58:30 [DEBUG] nomad.periodic: launching job "5d34f199-0037-d9c5-824e-bec53bf67541" in -646.573µs
2016/04/14 12:58:30 [DEBUG] nomad.periodic: launching job 5d34f199-0037-d9c5-824e-bec53bf67541 at 2016-04-14 12:58:30 +0000 UTC
--- PASS: TestPeriodicDispatch_Run_SameTime (2.00s)
2016/04/14 12:58:31 [DEBUG] nomad.periodic: registered periodic job "fa8c8d56-9f85-14fa-5bd7-c71a1314f0ca"
2016/04/14 12:58:31 [DEBUG] nomad.periodic: registered periodic job "9a13afa4-c8bc-94cc-59b5-66d3fdb73d61"
2016/04/14 12:58:31 [DEBUG] nomad.periodic: registered periodic job "a1fca12c-0e99-4f06-bc11-ff07db80a816"
2016/04/14 12:58:31 [DEBUG] nomad.periodic: registered periodic job "9d2db39b-e4e3-8233-cc9e-42fe8735d521"
2016/04/14 12:58:31 [DEBUG] nomad.periodic: registered periodic job "dc4ecf6c-36a1-acd5-81b0-bd40204a6ff3"
2016/04/14 12:58:31 [DEBUG] nomad.periodic: registered periodic job "0b2ce827-c4fe-5e96-2f30-1319d727db8d"
2016/04/14 12:58:31 [DEBUG] nomad.periodic: registered periodic job "b1a4131f-ac4c-69ca-bd71-ae2076ef7b7b"
2016/04/14 12:58:31 [DEBUG] nomad.periodic: registered periodic job "45888cca-6168-2947-a5d1-aa89ac4f5d9c"
2016/04/14 12:58:31 [DEBUG] nomad.periodic: deregistered periodic job "a1fca12c-0e99-4f06-bc11-ff07db80a816"
2016/04/14 12:58:31 [DEBUG] nomad.periodic: deregistered periodic job "fa8c8d56-9f85-14fa-5bd7-c71a1314f0ca"
2016/04/14 12:58:31 [DEBUG] nomad.periodic: deregistered periodic job "dc4ecf6c-36a1-acd5-81b0-bd40204a6ff3"
2016/04/14 12:58:31 [DEBUG] nomad.periodic: launching job "b1a4131f-ac4c-69ca-bd71-ae2076ef7b7b" in 546.644426ms
2016/04/14 12:58:31 [DEBUG] nomad.periodic: launching job "b1a4131f-ac4c-69ca-bd71-ae2076ef7b7b" in 546.360093ms
2016/04/14 12:58:32 [DEBUG] nomad.periodic: launching job b1a4131f-ac4c-69ca-bd71-ae2076ef7b7b at 2016-04-14 12:58:32 +0000 UTC
2016/04/14 12:58:32 [DEBUG] nomad.periodic: launching job "45888cca-6168-2947-a5d1-aa89ac4f5d9c" in -2.770574ms
2016/04/14 12:58:32 [DEBUG] nomad.periodic: launching job 45888cca-6168-2947-a5d1-aa89ac4f5d9c at 2016-04-14 12:58:32 +0000 UTC
2016/04/14 12:58:32 [DEBUG] nomad.periodic: launching job "9a13afa4-c8bc-94cc-59b5-66d3fdb73d61" in 994.866093ms
2016/04/14 12:58:33 [DEBUG] nomad.periodic: launching job 8474972c-452f-6c74-5118-2b17b84acc2d at 2016-04-14 12:58:33 +0000 UTC
2016/04/14 12:58:33 [DEBUG] nomad.periodic: launching job 9a13afa4-c8bc-94cc-59b5-66d3fdb73d61 at 2016-04-14 12:58:33 +0000 UTC
2016/04/14 12:58:33 [DEBUG] nomad.periodic: launching job "0b2ce827-c4fe-5e96-2f30-1319d727db8d" in 999.11476ms
2016/04/14 12:58:34 [DEBUG] nomad.periodic: launching job 0b2ce827-c4fe-5e96-2f30-1319d727db8d at 2016-04-14 12:58:34 +0000 UTC
2016/04/14 12:58:34 [DEBUG] nomad.periodic: launching job "9a13afa4-c8bc-94cc-59b5-66d3fdb73d61" in 999.233759ms
2016/04/14 12:58:35 [DEBUG] nomad.periodic: launching job 9a13afa4-c8bc-94cc-59b5-66d3fdb73d61 at 2016-04-14 12:58:35 +0000 UTC
--- PASS: TestPeriodicDispatch_Complex (5.00s)
--- PASS: TestPeriodicHeap_Order (0.00s)
PASS
ok  	github.com/hashicorp/nomad/nomad	38.936s
?   	github.com/hashicorp/nomad/nomad/mock	[no test files]
=== RUN   TestNotifyGroup
--- PASS: TestNotifyGroup (0.00s)
=== RUN   TestNotifyGroup_Clear
--- PASS: TestNotifyGroup_Clear (0.00s)
=== RUN   TestStateStoreSchema
--- PASS: TestStateStoreSchema (0.00s)
=== RUN   TestStateStore_UpsertNode_Node
--- PASS: TestStateStore_UpsertNode_Node (0.00s)
=== RUN   TestStateStore_DeleteNode_Node
--- PASS: TestStateStore_DeleteNode_Node (0.00s)
=== RUN   TestStateStore_UpdateNodeStatus_Node
--- PASS: TestStateStore_UpdateNodeStatus_Node (0.00s)
=== RUN   TestStateStore_UpdateNodeDrain_Node
--- PASS: TestStateStore_UpdateNodeDrain_Node (0.00s)
=== RUN   TestStateStore_Nodes
--- PASS: TestStateStore_Nodes (0.01s)
=== RUN   TestStateStore_NodesByIDPrefix
--- PASS: TestStateStore_NodesByIDPrefix (0.00s)
=== RUN   TestStateStore_RestoreNode
--- PASS: TestStateStore_RestoreNode (0.00s)
=== RUN   TestStateStore_UpsertJob_Job
--- PASS: TestStateStore_UpsertJob_Job (0.00s)
=== RUN   TestStateStore_UpdateUpsertJob_Job
--- PASS: TestStateStore_UpdateUpsertJob_Job (0.00s)
=== RUN   TestStateStore_DeleteJob_Job
--- PASS: TestStateStore_DeleteJob_Job (0.00s)
=== RUN   TestStateStore_Jobs
--- PASS: TestStateStore_Jobs (0.01s)
=== RUN   TestStateStore_JobsByIDPrefix
--- PASS: TestStateStore_JobsByIDPrefix (0.00s)
=== RUN   TestStateStore_JobsByPeriodic
--- PASS: TestStateStore_JobsByPeriodic (0.01s)
=== RUN   TestStateStore_JobsByScheduler
--- PASS: TestStateStore_JobsByScheduler (0.01s)
=== RUN   TestStateStore_JobsByGC
--- PASS: TestStateStore_JobsByGC (0.01s)
=== RUN   TestStateStore_RestoreJob
--- PASS: TestStateStore_RestoreJob (0.00s)
=== RUN   TestStateStore_UpsertPeriodicLaunch
--- PASS: TestStateStore_UpsertPeriodicLaunch (0.00s)
=== RUN   TestStateStore_UpdateUpsertPeriodicLaunch
--- PASS: TestStateStore_UpdateUpsertPeriodicLaunch (0.00s)
=== RUN   TestStateStore_DeletePeriodicLaunch
--- PASS: TestStateStore_DeletePeriodicLaunch (0.00s)
=== RUN   TestStateStore_PeriodicLaunches
--- PASS: TestStateStore_PeriodicLaunches (0.00s)
=== RUN   TestStateStore_RestorePeriodicLaunch
--- PASS: TestStateStore_RestorePeriodicLaunch (0.00s)
=== RUN   TestStateStore_Indexes
--- PASS: TestStateStore_Indexes (0.00s)
=== RUN   TestStateStore_RestoreIndex
--- PASS: TestStateStore_RestoreIndex (0.00s)
=== RUN   TestStateStore_UpsertEvals_Eval
--- PASS: TestStateStore_UpsertEvals_Eval (0.00s)
=== RUN   TestStateStore_Update_UpsertEvals_Eval
--- PASS: TestStateStore_Update_UpsertEvals_Eval (0.00s)
=== RUN   TestStateStore_DeleteEval_Eval
--- PASS: TestStateStore_DeleteEval_Eval (0.00s)
=== RUN   TestStateStore_EvalsByJob
--- PASS: TestStateStore_EvalsByJob (0.00s)
=== RUN   TestStateStore_Evals
--- PASS: TestStateStore_Evals (0.01s)
=== RUN   TestStateStore_EvalsByIDPrefix
--- PASS: TestStateStore_EvalsByIDPrefix (0.00s)
=== RUN   TestStateStore_RestoreEval
--- PASS: TestStateStore_RestoreEval (0.00s)
=== RUN   TestStateStore_UpdateAllocsFromClient
--- PASS: TestStateStore_UpdateAllocsFromClient (0.00s)
=== RUN   TestStateStore_UpsertAlloc_Alloc
--- PASS: TestStateStore_UpsertAlloc_Alloc (0.00s)
=== RUN   TestStateStore_UpdateAlloc_Alloc
--- PASS: TestStateStore_UpdateAlloc_Alloc (0.00s)
=== RUN   TestStateStore_EvictAlloc_Alloc
--- PASS: TestStateStore_EvictAlloc_Alloc (0.00s)
=== RUN   TestStateStore_AllocsByNode
--- PASS: TestStateStore_AllocsByNode (0.01s)
=== RUN   TestStateStore_AllocsByNodeTerminal
--- PASS: TestStateStore_AllocsByNodeTerminal (0.01s)
=== RUN   TestStateStore_AllocsByJob
--- PASS: TestStateStore_AllocsByJob (0.01s)
=== RUN   TestStateStore_AllocsByIDPrefix
--- PASS: TestStateStore_AllocsByIDPrefix (0.01s)
=== RUN   TestStateStore_Allocs
--- PASS: TestStateStore_Allocs (0.01s)
=== RUN   TestStateStore_RestoreAlloc
--- PASS: TestStateStore_RestoreAlloc (0.00s)
=== RUN   TestStateStore_SetJobStatus_ForceStatus
--- PASS: TestStateStore_SetJobStatus_ForceStatus (0.00s)
=== RUN   TestStateStore_SetJobStatus_NoOp
--- PASS: TestStateStore_SetJobStatus_NoOp (0.00s)
=== RUN   TestStateStore_SetJobStatus
--- PASS: TestStateStore_SetJobStatus (0.00s)
=== RUN   TestStateStore_GetJobStatus_NoEvalsOrAllocs
--- PASS: TestStateStore_GetJobStatus_NoEvalsOrAllocs (0.00s)
=== RUN   TestStateStore_GetJobStatus_NoEvalsOrAllocs_Periodic
--- PASS: TestStateStore_GetJobStatus_NoEvalsOrAllocs_Periodic (0.00s)
=== RUN   TestStateStore_GetJobStatus_NoEvalsOrAllocs_EvalDelete
--- PASS: TestStateStore_GetJobStatus_NoEvalsOrAllocs_EvalDelete (0.00s)
=== RUN   TestStateStore_GetJobStatus_DeadEvalsAndAllocs
--- PASS: TestStateStore_GetJobStatus_DeadEvalsAndAllocs (0.00s)
=== RUN   TestStateStore_GetJobStatus_RunningAlloc
--- PASS: TestStateStore_GetJobStatus_RunningAlloc (0.00s)
=== RUN   TestStateStore_SetJobStatus_PendingEval
--- PASS: TestStateStore_SetJobStatus_PendingEval (0.00s)
=== RUN   TestStateWatch_watch
--- PASS: TestStateWatch_watch (0.00s)
=== RUN   TestStateWatch_stopWatch
--- PASS: TestStateWatch_stopWatch (0.00s)
PASS
ok  	github.com/hashicorp/nomad/nomad/state	0.216s
=== RUN   TestBitmap
--- PASS: TestBitmap (0.00s)
=== RUN   TestRemoveAllocs
--- PASS: TestRemoveAllocs (0.00s)
=== RUN   TestFilterTerminalAllocs
--- PASS: TestFilterTerminalAllocs (0.00s)
=== RUN   TestAllocsFit_PortsOvercommitted
--- PASS: TestAllocsFit_PortsOvercommitted (0.00s)
=== RUN   TestAllocsFit
--- PASS: TestAllocsFit (0.00s)
=== RUN   TestScoreFit
--- PASS: TestScoreFit (0.00s)
=== RUN   TestGenerateUUID
--- PASS: TestGenerateUUID (0.05s)
=== RUN   TestNetworkIndex_Overcommitted
--- PASS: TestNetworkIndex_Overcommitted (0.00s)
=== RUN   TestNetworkIndex_SetNode
--- PASS: TestNetworkIndex_SetNode (0.00s)
=== RUN   TestNetworkIndex_AddAllocs
--- PASS: TestNetworkIndex_AddAllocs (0.00s)
=== RUN   TestNetworkIndex_AddReserved
--- PASS: TestNetworkIndex_AddReserved (0.00s)
=== RUN   TestNetworkIndex_yieldIP
--- PASS: TestNetworkIndex_yieldIP (0.00s)
=== RUN   TestNetworkIndex_AssignNetwork
--- PASS: TestNetworkIndex_AssignNetwork (0.00s)
=== RUN   TestIntContains
--- PASS: TestIntContains (0.00s)
=== RUN   TestNode_ComputedClass
--- PASS: TestNode_ComputedClass (0.00s)
=== RUN   TestNode_ComputedClass_Ignore
--- PASS: TestNode_ComputedClass_Ignore (0.00s)
=== RUN   TestNode_ComputedClass_Attr
--- PASS: TestNode_ComputedClass_Attr (0.00s)
=== RUN   TestNode_ComputedClass_Meta
--- PASS: TestNode_ComputedClass_Meta (0.00s)
=== RUN   TestNode_EscapedConstraints
--- PASS: TestNode_EscapedConstraints (0.00s)
=== RUN   TestJob_Validate
--- PASS: TestJob_Validate (0.00s)
=== RUN   TestJob_Copy
--- PASS: TestJob_Copy (0.00s)
=== RUN   TestJob_IsPeriodic
--- PASS: TestJob_IsPeriodic (0.00s)
=== RUN   TestTaskGroup_Validate
--- PASS: TestTaskGroup_Validate (0.00s)
=== RUN   TestTask_Validate
--- PASS: TestTask_Validate (0.00s)
=== RUN   TestTask_Validate_LogConfig
--- PASS: TestTask_Validate_LogConfig (0.00s)
=== RUN   TestConstraint_Validate
--- PASS: TestConstraint_Validate (0.00s)
=== RUN   TestResource_NetIndex
--- PASS: TestResource_NetIndex (0.00s)
=== RUN   TestResource_Superset
--- PASS: TestResource_Superset (0.00s)
=== RUN   TestResource_Add
--- PASS: TestResource_Add (0.00s)
=== RUN   TestResource_Add_Network
--- PASS: TestResource_Add_Network (0.00s)
=== RUN   TestEncodeDecode
--- PASS: TestEncodeDecode (0.00s)
=== RUN   TestInvalidServiceCheck
--- PASS: TestInvalidServiceCheck (0.01s)
=== RUN   TestDistinctCheckID
--- PASS: TestDistinctCheckID (0.00s)
=== RUN   TestService_InitFields
--- PASS: TestService_InitFields (0.00s)
=== RUN   TestJob_ExpandServiceNames
--- PASS: TestJob_ExpandServiceNames (0.00s)
=== RUN   TestPeriodicConfig_EnabledInvalid
--- PASS: TestPeriodicConfig_EnabledInvalid (0.00s)
=== RUN   TestPeriodicConfig_InvalidCron
--- PASS: TestPeriodicConfig_InvalidCron (0.00s)
=== RUN   TestPeriodicConfig_ValidCron
--- PASS: TestPeriodicConfig_ValidCron (0.00s)
=== RUN   TestPeriodicConfig_NextCron
--- PASS: TestPeriodicConfig_NextCron (0.00s)
=== RUN   TestRestartPolicy_Validate
--- PASS: TestRestartPolicy_Validate (0.00s)
=== RUN   TestAllocation_Index
--- PASS: TestAllocation_Index (0.00s)
=== RUN   TestTaskArtifact_Validate_Source
--- PASS: TestTaskArtifact_Validate_Source (0.00s)
=== RUN   TestTaskArtifact_Validate_Checksum
--- PASS: TestTaskArtifact_Validate_Checksum (0.00s)
PASS
ok  	github.com/hashicorp/nomad/nomad/structs	0.134s
=== RUN   TestWatchItems
--- PASS: TestWatchItems (0.00s)
PASS
ok  	github.com/hashicorp/nomad/nomad/watch	0.020s
=== RUN   TestEvalContext_ProposedAlloc
--- PASS: TestEvalContext_ProposedAlloc (0.00s)
=== RUN   TestEvalEligibility_JobStatus
false 
--- PASS: TestEvalEligibility_JobStatus (0.00s)
=== RUN   TestEvalEligibility_TaskGroupStatus
--- PASS: TestEvalEligibility_TaskGroupStatus (0.00s)
=== RUN   TestEvalEligibility_SetJob
--- PASS: TestEvalEligibility_SetJob (0.00s)
=== RUN   TestEvalEligibility_GetClasses
--- PASS: TestEvalEligibility_GetClasses (0.00s)
=== RUN   TestStaticIterator_Reset
--- PASS: TestStaticIterator_Reset (0.00s)
=== RUN   TestStaticIterator_SetNodes
--- PASS: TestStaticIterator_SetNodes (0.00s)
=== RUN   TestRandomIterator
--- PASS: TestRandomIterator (0.00s)
=== RUN   TestDriverChecker
--- PASS: TestDriverChecker (0.00s)
=== RUN   TestConstraintChecker
--- PASS: TestConstraintChecker (0.00s)
=== RUN   TestResolveConstraintTarget
--- PASS: TestResolveConstraintTarget (0.00s)
=== RUN   TestCheckConstraint
--- PASS: TestCheckConstraint (0.00s)
=== RUN   TestCheckLexicalOrder
--- PASS: TestCheckLexicalOrder (0.00s)
=== RUN   TestCheckVersionConstraint
--- PASS: TestCheckVersionConstraint (0.00s)
=== RUN   TestCheckRegexpConstraint
--- PASS: TestCheckRegexpConstraint (0.00s)
=== RUN   TestProposedAllocConstraint_JobDistinctHosts
--- PASS: TestProposedAllocConstraint_JobDistinctHosts (0.00s)
=== RUN   TestProposedAllocConstraint_JobDistinctHosts_Infeasible
--- PASS: TestProposedAllocConstraint_JobDistinctHosts_Infeasible (0.00s)
=== RUN   TestProposedAllocConstraint_JobDistinctHosts_InfeasibleCount
--- PASS: TestProposedAllocConstraint_JobDistinctHosts_InfeasibleCount (0.00s)
=== RUN   TestProposedAllocConstraint_TaskGroupDistinctHosts
--- PASS: TestProposedAllocConstraint_TaskGroupDistinctHosts (0.00s)
=== RUN   TestFeasibilityWrapper_JobIneligible
--- PASS: TestFeasibilityWrapper_JobIneligible (0.00s)
=== RUN   TestFeasibilityWrapper_JobEscapes
--- PASS: TestFeasibilityWrapper_JobEscapes (0.00s)
=== RUN   TestFeasibilityWrapper_JobAndTg_Eligible
--- PASS: TestFeasibilityWrapper_JobAndTg_Eligible (0.00s)
=== RUN   TestFeasibilityWrapper_JobEligible_TgIneligible
--- PASS: TestFeasibilityWrapper_JobEligible_TgIneligible (0.00s)
=== RUN   TestFeasibilityWrapper_JobEligible_TgEscaped
--- PASS: TestFeasibilityWrapper_JobEligible_TgEscaped (0.00s)
=== RUN   TestServiceSched_JobRegister
2016/04/14 12:58:26 [DEBUG] sched: <Eval '56d3b857-dffd-2e49-2b40-1a32c01f33c0' JobID: 'cfbfc66b-0823-3563-af82-7676bb00ad3b'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '56d3b857-dffd-2e49-2b40-1a32c01f33c0' JobID: 'cfbfc66b-0823-3563-af82-7676bb00ad3b'>: setting status to complete
--- PASS: TestServiceSched_JobRegister (0.02s)
=== RUN   TestServiceSched_JobRegister_AllocFail
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fef64309-aea7-2ec5-530a-c64ab056f94a' JobID: 'fc5e9dc9-d7b2-748c-b9f3-0d909f22d5ff'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fef64309-aea7-2ec5-530a-c64ab056f94a' JobID: 'fc5e9dc9-d7b2-748c-b9f3-0d909f22d5ff'>: failed to place all allocations, blocked eval '666d2941-a50b-ddf3-267b-eaa7ff8a9352' created
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fef64309-aea7-2ec5-530a-c64ab056f94a' JobID: 'fc5e9dc9-d7b2-748c-b9f3-0d909f22d5ff'>: setting status to complete
--- PASS: TestServiceSched_JobRegister_AllocFail (0.00s)
=== RUN   TestServiceSched_JobRegister_BlockedEval
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'b3b4d321-c833-abce-56cd-e35e71617a52' JobID: '28e3f580-e6df-d6e3-8d01-12daa74c48af'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'b3b4d321-c833-abce-56cd-e35e71617a52' JobID: '28e3f580-e6df-d6e3-8d01-12daa74c48af'>: failed to place all allocations, blocked eval 'dbecdbd1-b277-96bd-014b-ecc3be470909' created
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'b3b4d321-c833-abce-56cd-e35e71617a52' JobID: '28e3f580-e6df-d6e3-8d01-12daa74c48af'>: setting status to complete
--- PASS: TestServiceSched_JobRegister_BlockedEval (0.01s)
=== RUN   TestServiceSched_JobRegister_FeasibleAndInfeasibleTG
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'f2d0560c-1ddf-b522-5c53-737747bc8be0' JobID: '712d5163-1417-2727-8150-2a8faeb65458'>: allocs: (place 4) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'f2d0560c-1ddf-b522-5c53-737747bc8be0' JobID: '712d5163-1417-2727-8150-2a8faeb65458'>: failed to place all allocations, blocked eval '9fad2e07-c97c-069f-cef6-8de6710d27ef' created
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'f2d0560c-1ddf-b522-5c53-737747bc8be0' JobID: '712d5163-1417-2727-8150-2a8faeb65458'>: setting status to complete
--- PASS: TestServiceSched_JobRegister_FeasibleAndInfeasibleTG (0.01s)
=== RUN   TestServiceSched_JobModify
2016/04/14 12:58:26 [DEBUG] sched: <Eval '8d5bc8c8-a5e5-29c0-99f3-3a93e4e1cce8' JobID: 'b4b378bc-455b-c74d-aca0-54d29343c9e2'>: allocs: (place 0) (update 10) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '8d5bc8c8-a5e5-29c0-99f3-3a93e4e1cce8' JobID: 'b4b378bc-455b-c74d-aca0-54d29343c9e2'>: 0 in-place updates of 10
2016/04/14 12:58:26 [DEBUG] sched: <Eval '8d5bc8c8-a5e5-29c0-99f3-3a93e4e1cce8' JobID: 'b4b378bc-455b-c74d-aca0-54d29343c9e2'>: setting status to complete
--- PASS: TestServiceSched_JobModify (0.03s)
=== RUN   TestServiceSched_JobModify_Rolling
2016/04/14 12:58:26 [DEBUG] sched: <Eval '1b1f7317-206a-fa85-49c7-df0f2a3c87cc' JobID: '1d365417-73b9-9ba6-0315-65a05b1be284'>: allocs: (place 0) (update 10) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '1b1f7317-206a-fa85-49c7-df0f2a3c87cc' JobID: '1d365417-73b9-9ba6-0315-65a05b1be284'>: 0 in-place updates of 10
2016/04/14 12:58:26 [DEBUG] sched: <Eval '1b1f7317-206a-fa85-49c7-df0f2a3c87cc' JobID: '1d365417-73b9-9ba6-0315-65a05b1be284'>: rolling update limit reached, next eval '02511c51-7437-5f1d-4261-e6061107c0af' created
2016/04/14 12:58:26 [DEBUG] sched: <Eval '1b1f7317-206a-fa85-49c7-df0f2a3c87cc' JobID: '1d365417-73b9-9ba6-0315-65a05b1be284'>: setting status to complete
--- PASS: TestServiceSched_JobModify_Rolling (0.02s)
=== RUN   TestServiceSched_JobModify_InPlace
2016/04/14 12:58:26 [DEBUG] sched: <Eval '0434a962-2914-74eb-5392-ef17e7b7ed09' JobID: '50137bc3-9f95-d76a-1ae0-1f35804e2d0b'>: allocs: (place 0) (update 10) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '0434a962-2914-74eb-5392-ef17e7b7ed09' JobID: '50137bc3-9f95-d76a-1ae0-1f35804e2d0b'>: 10 in-place updates of 10
2016/04/14 12:58:26 [DEBUG] sched: <Eval '0434a962-2914-74eb-5392-ef17e7b7ed09' JobID: '50137bc3-9f95-d76a-1ae0-1f35804e2d0b'>: setting status to complete
--- PASS: TestServiceSched_JobModify_InPlace (0.02s)
=== RUN   TestServiceSched_JobDeregister
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'f30b539c-2b82-9abd-2f57-9008808bf928' JobID: '628b36be-2020-2155-e85b-bee5e54fff00'>: allocs: (place 0) (update 0) (migrate 0) (stop 10) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'f30b539c-2b82-9abd-2f57-9008808bf928' JobID: '628b36be-2020-2155-e85b-bee5e54fff00'>: setting status to complete
--- PASS: TestServiceSched_JobDeregister (0.01s)
=== RUN   TestServiceSched_NodeDrain
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'f8af37d6-65b5-32fb-e5f2-a7a81d29e325' JobID: '22b889fa-3914-b752-d627-95dcd058ea2b'>: allocs: (place 0) (update 0) (migrate 10) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'f8af37d6-65b5-32fb-e5f2-a7a81d29e325' JobID: '22b889fa-3914-b752-d627-95dcd058ea2b'>: setting status to complete
--- PASS: TestServiceSched_NodeDrain (0.03s)
=== RUN   TestServiceSched_NodeDrain_UpdateStrategy
2016/04/14 12:58:26 [DEBUG] sched: <Eval '694bf041-8b1f-13ff-9300-da6b74635d00' JobID: 'f26937b3-b64e-7163-3530-b256aa3ee840'>: allocs: (place 0) (update 0) (migrate 10) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '694bf041-8b1f-13ff-9300-da6b74635d00' JobID: 'f26937b3-b64e-7163-3530-b256aa3ee840'>: rolling update limit reached, next eval 'bf8ab482-74e9-1e5d-b161-9ada832257e2' created
2016/04/14 12:58:26 [DEBUG] sched: <Eval '694bf041-8b1f-13ff-9300-da6b74635d00' JobID: 'f26937b3-b64e-7163-3530-b256aa3ee840'>: setting status to complete
--- PASS: TestServiceSched_NodeDrain_UpdateStrategy (0.02s)
=== RUN   TestServiceSched_RetryLimit
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4f30b439-9549-4a55-641f-885dc1b00e01' JobID: 'dc342b29-6e8d-a1e5-106a-4d84adf33ad1'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4f30b439-9549-4a55-641f-885dc1b00e01' JobID: 'dc342b29-6e8d-a1e5-106a-4d84adf33ad1'>: refresh forced
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4f30b439-9549-4a55-641f-885dc1b00e01' JobID: 'dc342b29-6e8d-a1e5-106a-4d84adf33ad1'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4f30b439-9549-4a55-641f-885dc1b00e01' JobID: 'dc342b29-6e8d-a1e5-106a-4d84adf33ad1'>: refresh forced
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4f30b439-9549-4a55-641f-885dc1b00e01' JobID: 'dc342b29-6e8d-a1e5-106a-4d84adf33ad1'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4f30b439-9549-4a55-641f-885dc1b00e01' JobID: 'dc342b29-6e8d-a1e5-106a-4d84adf33ad1'>: refresh forced
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4f30b439-9549-4a55-641f-885dc1b00e01' JobID: 'dc342b29-6e8d-a1e5-106a-4d84adf33ad1'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4f30b439-9549-4a55-641f-885dc1b00e01' JobID: 'dc342b29-6e8d-a1e5-106a-4d84adf33ad1'>: refresh forced
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4f30b439-9549-4a55-641f-885dc1b00e01' JobID: 'dc342b29-6e8d-a1e5-106a-4d84adf33ad1'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4f30b439-9549-4a55-641f-885dc1b00e01' JobID: 'dc342b29-6e8d-a1e5-106a-4d84adf33ad1'>: refresh forced
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4f30b439-9549-4a55-641f-885dc1b00e01' JobID: 'dc342b29-6e8d-a1e5-106a-4d84adf33ad1'>: setting status to failed
--- PASS: TestServiceSched_RetryLimit (0.05s)
=== RUN   TestBatchSched_Run_DeadAlloc
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'd6ba3108-3d08-53c1-13a8-8a80567122b2' JobID: '755eb186-7d21-2424-8342-2d03995fd0af'>: allocs: (place 0) (update 0) (migrate 0) (stop 0) (ignore 1)
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'd6ba3108-3d08-53c1-13a8-8a80567122b2' JobID: '755eb186-7d21-2424-8342-2d03995fd0af'>: setting status to complete
--- PASS: TestBatchSched_Run_DeadAlloc (0.00s)
=== RUN   TestBatchSched_Run_FailedAlloc
2016/04/14 12:58:26 [DEBUG] sched: <Eval '3bf59229-4644-19d7-931e-309e0c938583' JobID: '37dbda1b-b84f-d942-0513-a2a34573800c'>: allocs: (place 1) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '3bf59229-4644-19d7-931e-309e0c938583' JobID: '37dbda1b-b84f-d942-0513-a2a34573800c'>: setting status to complete
--- PASS: TestBatchSched_Run_FailedAlloc (0.01s)
=== RUN   TestFeasibleRankIterator
--- PASS: TestFeasibleRankIterator (0.00s)
=== RUN   TestBinPackIterator_NoExistingAlloc
--- PASS: TestBinPackIterator_NoExistingAlloc (0.00s)
=== RUN   TestBinPackIterator_PlannedAlloc
--- PASS: TestBinPackIterator_PlannedAlloc (0.00s)
=== RUN   TestBinPackIterator_ExistingAlloc
--- PASS: TestBinPackIterator_ExistingAlloc (0.00s)
=== RUN   TestBinPackIterator_ExistingAlloc_PlannedEvict
--- PASS: TestBinPackIterator_ExistingAlloc_PlannedEvict (0.01s)
=== RUN   TestJobAntiAffinity_PlannedAlloc
--- PASS: TestJobAntiAffinity_PlannedAlloc (0.00s)
=== RUN   TestLimitIterator
--- PASS: TestLimitIterator (0.00s)
=== RUN   TestMaxScoreIterator
--- PASS: TestMaxScoreIterator (0.00s)
=== RUN   TestServiceStack_SetNodes
--- PASS: TestServiceStack_SetNodes (0.00s)
=== RUN   TestServiceStack_SetJob
--- PASS: TestServiceStack_SetJob (0.00s)
=== RUN   TestServiceStack_Select_Size
--- PASS: TestServiceStack_Select_Size (0.00s)
=== RUN   TestServiceStack_Select_MetricsReset
--- PASS: TestServiceStack_Select_MetricsReset (0.00s)
=== RUN   TestServiceStack_Select_DriverFilter
--- PASS: TestServiceStack_Select_DriverFilter (0.00s)
=== RUN   TestServiceStack_Select_ConstraintFilter
--- PASS: TestServiceStack_Select_ConstraintFilter (0.00s)
=== RUN   TestServiceStack_Select_BinPack_Overflow
--- PASS: TestServiceStack_Select_BinPack_Overflow (0.00s)
=== RUN   TestSystemStack_SetNodes
--- PASS: TestSystemStack_SetNodes (0.00s)
=== RUN   TestSystemStack_SetJob
--- PASS: TestSystemStack_SetJob (0.00s)
=== RUN   TestSystemStack_Select_Size
--- PASS: TestSystemStack_Select_Size (0.00s)
=== RUN   TestSystemStack_Select_MetricsReset
--- PASS: TestSystemStack_Select_MetricsReset (0.00s)
=== RUN   TestSystemStack_Select_DriverFilter
--- PASS: TestSystemStack_Select_DriverFilter (0.00s)
=== RUN   TestSystemStack_Select_ConstraintFilter
--- PASS: TestSystemStack_Select_ConstraintFilter (0.00s)
=== RUN   TestSystemStack_Select_BinPack_Overflow
--- PASS: TestSystemStack_Select_BinPack_Overflow (0.00s)
=== RUN   TestSystemSched_JobRegister
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4781e87f-b7c9-225d-01a3-9d01c94b9571' JobID: '961c30a1-b0a3-975c-2997-b3d028c058c3'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '4781e87f-b7c9-225d-01a3-9d01c94b9571' JobID: '961c30a1-b0a3-975c-2997-b3d028c058c3'>: setting status to complete
--- PASS: TestSystemSched_JobRegister (0.01s)
=== RUN   TestSystemSched_JobRegister_AddNode
2016/04/14 12:58:26 [DEBUG] sched: <Eval '13efc183-5022-8354-87b0-eb23ddfa7722' JobID: '3ca75e4d-c372-4f45-cbe6-50bf3ed024ba'>: allocs: (place 1) (update 0) (migrate 0) (stop 0) (ignore 10)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '13efc183-5022-8354-87b0-eb23ddfa7722' JobID: '3ca75e4d-c372-4f45-cbe6-50bf3ed024ba'>: setting status to complete
--- PASS: TestSystemSched_JobRegister_AddNode (0.01s)
=== RUN   TestSystemSched_JobRegister_AllocFail
2016/04/14 12:58:26 [DEBUG] sched: <Eval '9a21fd7c-045a-aadc-ace6-77e7c0420f46' JobID: '37bf0b80-1404-31cb-0808-be94c6cc42c5'>: allocs: (place 0) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '9a21fd7c-045a-aadc-ace6-77e7c0420f46' JobID: '37bf0b80-1404-31cb-0808-be94c6cc42c5'>: setting status to complete
--- PASS: TestSystemSched_JobRegister_AllocFail (0.00s)
=== RUN   TestSystemSched_JobModify
2016/04/14 12:58:26 [DEBUG] sched: <Eval '88255ea3-b628-a5c6-7d44-251979da97e7' JobID: '40351e62-dd5a-5a17-19d6-27ae2c55200f'>: allocs: (place 0) (update 10) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '88255ea3-b628-a5c6-7d44-251979da97e7' JobID: '40351e62-dd5a-5a17-19d6-27ae2c55200f'>: 0 in-place updates of 10
2016/04/14 12:58:26 [DEBUG] sched: <Eval '88255ea3-b628-a5c6-7d44-251979da97e7' JobID: '40351e62-dd5a-5a17-19d6-27ae2c55200f'>: setting status to complete
--- PASS: TestSystemSched_JobModify (0.03s)
=== RUN   TestSystemSched_JobModify_Rolling
2016/04/14 12:58:26 [DEBUG] sched: <Eval '0e1ebef4-bdc3-f29e-057a-3d26f1f680fc' JobID: '0b638db3-dee6-2c96-95f8-4dd1e7befa8e'>: allocs: (place 0) (update 10) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '0e1ebef4-bdc3-f29e-057a-3d26f1f680fc' JobID: '0b638db3-dee6-2c96-95f8-4dd1e7befa8e'>: 0 in-place updates of 10
2016/04/14 12:58:26 [DEBUG] sched: <Eval '0e1ebef4-bdc3-f29e-057a-3d26f1f680fc' JobID: '0b638db3-dee6-2c96-95f8-4dd1e7befa8e'>: rolling update limit reached, next eval '60b29efd-4529-3648-2ca7-6ad713ed57f3' created
2016/04/14 12:58:26 [DEBUG] sched: <Eval '0e1ebef4-bdc3-f29e-057a-3d26f1f680fc' JobID: '0b638db3-dee6-2c96-95f8-4dd1e7befa8e'>: setting status to complete
--- PASS: TestSystemSched_JobModify_Rolling (0.02s)
=== RUN   TestSystemSched_JobModify_InPlace
2016/04/14 12:58:26 [DEBUG] sched: <Eval '6d4e0df8-6791-89b3-bad6-07c67aebc470' JobID: '4799d702-bd4d-35ed-ca3d-8138a0af92c5'>: allocs: (place 0) (update 10) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '6d4e0df8-6791-89b3-bad6-07c67aebc470' JobID: '4799d702-bd4d-35ed-ca3d-8138a0af92c5'>: 10 in-place updates of 10
2016/04/14 12:58:26 [DEBUG] sched: <Eval '6d4e0df8-6791-89b3-bad6-07c67aebc470' JobID: '4799d702-bd4d-35ed-ca3d-8138a0af92c5'>: setting status to complete
--- PASS: TestSystemSched_JobModify_InPlace (0.02s)
=== RUN   TestSystemSched_JobDeregister
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'b8bbcf6b-c532-a46b-04f3-43a1b795b925' JobID: 'ef9c4d6b-f131-c547-b462-2ed385e03a3c'>: allocs: (place 0) (update 0) (migrate 0) (stop 10) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'b8bbcf6b-c532-a46b-04f3-43a1b795b925' JobID: 'ef9c4d6b-f131-c547-b462-2ed385e03a3c'>: setting status to complete
--- PASS: TestSystemSched_JobDeregister (0.02s)
=== RUN   TestSystemSched_NodeDrain
2016/04/14 12:58:26 [DEBUG] sched: <Eval '1ed2552f-ba6c-d565-f404-00162f46cdaf' JobID: 'e27b7363-54f2-0685-aa3a-5e3417dfe988'>: allocs: (place 0) (update 0) (migrate 0) (stop 1) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval '1ed2552f-ba6c-d565-f404-00162f46cdaf' JobID: 'e27b7363-54f2-0685-aa3a-5e3417dfe988'>: setting status to complete
--- PASS: TestSystemSched_NodeDrain (0.00s)
=== RUN   TestSystemSched_RetryLimit
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fe87a549-7223-c0f0-0f99-3294152525a0' JobID: '89c4a960-c63a-5329-3a27-2c13a264bf45'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fe87a549-7223-c0f0-0f99-3294152525a0' JobID: '89c4a960-c63a-5329-3a27-2c13a264bf45'>: refresh forced
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fe87a549-7223-c0f0-0f99-3294152525a0' JobID: '89c4a960-c63a-5329-3a27-2c13a264bf45'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fe87a549-7223-c0f0-0f99-3294152525a0' JobID: '89c4a960-c63a-5329-3a27-2c13a264bf45'>: refresh forced
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fe87a549-7223-c0f0-0f99-3294152525a0' JobID: '89c4a960-c63a-5329-3a27-2c13a264bf45'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fe87a549-7223-c0f0-0f99-3294152525a0' JobID: '89c4a960-c63a-5329-3a27-2c13a264bf45'>: refresh forced
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fe87a549-7223-c0f0-0f99-3294152525a0' JobID: '89c4a960-c63a-5329-3a27-2c13a264bf45'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fe87a549-7223-c0f0-0f99-3294152525a0' JobID: '89c4a960-c63a-5329-3a27-2c13a264bf45'>: refresh forced
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fe87a549-7223-c0f0-0f99-3294152525a0' JobID: '89c4a960-c63a-5329-3a27-2c13a264bf45'>: allocs: (place 10) (update 0) (migrate 0) (stop 0) (ignore 0)
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fe87a549-7223-c0f0-0f99-3294152525a0' JobID: '89c4a960-c63a-5329-3a27-2c13a264bf45'>: refresh forced
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'fe87a549-7223-c0f0-0f99-3294152525a0' JobID: '89c4a960-c63a-5329-3a27-2c13a264bf45'>: setting status to failed
--- PASS: TestSystemSched_RetryLimit (0.03s)
=== RUN   TestMaterializeTaskGroups
--- PASS: TestMaterializeTaskGroups (0.00s)
=== RUN   TestDiffAllocs
--- PASS: TestDiffAllocs (0.00s)
=== RUN   TestDiffSystemAllocs
--- PASS: TestDiffSystemAllocs (0.00s)
=== RUN   TestReadyNodesInDCs
--- PASS: TestReadyNodesInDCs (0.00s)
=== RUN   TestRetryMax
--- PASS: TestRetryMax (0.00s)
=== RUN   TestTaintedNodes
--- PASS: TestTaintedNodes (0.00s)
=== RUN   TestShuffleNodes
--- PASS: TestShuffleNodes (0.00s)
=== RUN   TestTasksUpdated
--- PASS: TestTasksUpdated (0.00s)
=== RUN   TestEvictAndPlace_LimitLessThanAllocs
--- PASS: TestEvictAndPlace_LimitLessThanAllocs (0.00s)
=== RUN   TestEvictAndPlace_LimitEqualToAllocs
--- PASS: TestEvictAndPlace_LimitEqualToAllocs (0.00s)
=== RUN   TestSetStatus
2016/04/14 12:58:26 [DEBUG] sched: <Eval '44a84687-f1b8-494c-4b70-a8d4b7a0450b' JobID: 'c709e231-da72-1ce7-de95-b53899b269a2'>: setting status to a
2016/04/14 12:58:26 [DEBUG] sched: <Eval '44a84687-f1b8-494c-4b70-a8d4b7a0450b' JobID: 'c709e231-da72-1ce7-de95-b53899b269a2'>: setting status to a
--- PASS: TestSetStatus (0.00s)
=== RUN   TestInplaceUpdate_ChangedTaskGroup
2016/04/14 12:58:26 [DEBUG] sched: <Eval 'a3aa986f-ac67-7cc0-f5ca-2a9fa746c60e' JobID: 'f05de825-567a-976d-5a24-40308fd276ec'>: 0 in-place updates of 1
--- PASS: TestInplaceUpdate_ChangedTaskGroup (0.00s)
=== RUN   TestInplaceUpdate_NoMatch
2016/04/14 12:58:26 [DEBUG] sched: <Eval '35e55ec9-a3b8-a9ef-8212-c873a4ac56d8' JobID: '46b2192b-5c45-8eaf-6c01-c97a2fd0244e'>: 0 in-place updates of 1
--- PASS: TestInplaceUpdate_NoMatch (0.00s)
=== RUN   TestInplaceUpdate_Success
2016/04/14 12:58:26 [DEBUG] sched: <Eval '3f1b61ab-e4e5-330c-1152-843bface21b9' JobID: 'ddbb71d4-09bb-36fe-3410-8a2249167d3f'>: 1 in-place updates of 1
--- PASS: TestInplaceUpdate_Success (0.00s)
=== RUN   TestEvictAndPlace_LimitGreaterThanAllocs
--- PASS: TestEvictAndPlace_LimitGreaterThanAllocs (0.00s)
=== RUN   TestTaskGroupConstraints
--- PASS: TestTaskGroupConstraints (0.00s)
=== RUN   TestProgressMade
--- PASS: TestProgressMade (0.00s)
PASS
ok  	github.com/hashicorp/nomad/scheduler	0.579s
testing: warning: no tests to run
PASS
ok  	github.com/hashicorp/nomad/testutil	0.056s
dh_auto_test: go test -v github.com/hashicorp/nomad github.com/hashicorp/nomad/api github.com/hashicorp/nomad/client github.com/hashicorp/nomad/client/allocdir github.com/hashicorp/nomad/client/config github.com/hashicorp/nomad/client/driver github.com/hashicorp/nomad/client/driver/env github.com/hashicorp/nomad/client/driver/executor github.com/hashicorp/nomad/client/driver/logging github.com/hashicorp/nomad/client/driver/structs github.com/hashicorp/nomad/client/fingerprint github.com/hashicorp/nomad/client/getter github.com/hashicorp/nomad/client/testutil github.com/hashicorp/nomad/command github.com/hashicorp/nomad/command/agent github.com/hashicorp/nomad/helper/args github.com/hashicorp/nomad/helper/discover github.com/hashicorp/nomad/helper/flag-slice github.com/hashicorp/nomad/helper/gated-writer github.com/hashicorp/nomad/helper/testtask github.com/hashicorp/nomad/jobspec github.com/hashicorp/nomad/nomad github.com/hashicorp/nomad/nomad/mock github.com/hashicorp/nomad/nomad/state github.com/hashicorp/nomad/nomad/structs github.com/hashicorp/nomad/nomad/watch github.com/hashicorp/nomad/scheduler github.com/hashicorp/nomad/testutil returned exit code 1
debian/rules:20: recipe for target 'override_dh_auto_test' failed
make[1]: [override_dh_auto_test] Error 1 (ignored)
make[1]: Leaving directory '/<<BUILDDIR>>/nomad-0.3.1+dfsg'
 fakeroot debian/rules binary-arch
dh binary-arch --buildsystem=golang --with=golang,systemd
   dh_testroot -a -O--buildsystem=golang
   dh_prep -a -O--buildsystem=golang
   dh_installdirs -a -O--buildsystem=golang
   debian/rules override_dh_auto_install
make[1]: Entering directory '/<<BUILDDIR>>/nomad-0.3.1+dfsg'
dh_auto_install --destdir=debian/tmp
	mkdir -p /<<BUILDDIR>>/nomad-0.3.1\+dfsg/debian/tmp/usr
	cp -r bin /<<BUILDDIR>>/nomad-0.3.1\+dfsg/debian/tmp/usr
	mkdir -p /<<BUILDDIR>>/nomad-0.3.1\+dfsg/debian/tmp/usr/share/gocode/src/github.com/hashicorp/nomad
	cp -r -T src/github.com/hashicorp/nomad /<<BUILDDIR>>/nomad-0.3.1\+dfsg/debian/tmp/usr/share/gocode/src/github.com/hashicorp/nomad
make[1]: Leaving directory '/<<BUILDDIR>>/nomad-0.3.1+dfsg'
   dh_install -a -O--buildsystem=golang
   dh_installdocs -a -O--buildsystem=golang
   dh_installchangelogs -a -O--buildsystem=golang
   dh_installexamples -a -O--buildsystem=golang
   debian/rules override_dh_systemd_enable
make[1]: Entering directory '/<<BUILDDIR>>/nomad-0.3.1+dfsg'
dh_systemd_enable --no-enable
make[1]: Leaving directory '/<<BUILDDIR>>/nomad-0.3.1+dfsg'
   debian/rules override_dh_installinit
make[1]: Entering directory '/<<BUILDDIR>>/nomad-0.3.1+dfsg'
dh_installinit --no-start
make[1]: Leaving directory '/<<BUILDDIR>>/nomad-0.3.1+dfsg'
   debian/rules override_dh_systemd_start
make[1]: Entering directory '/<<BUILDDIR>>/nomad-0.3.1+dfsg'
dh_systemd_start --no-start
make[1]: Leaving directory '/<<BUILDDIR>>/nomad-0.3.1+dfsg'
   dh_perl -a -O--buildsystem=golang
   dh_link -a -O--buildsystem=golang
   dh_strip_nondeterminism -a -O--buildsystem=golang
   dh_compress -a -O--buildsystem=golang
   dh_fixperms -a -O--buildsystem=golang
   dh_strip -a -O--buildsystem=golang
   dh_makeshlibs -a -O--buildsystem=golang
   dh_shlibdeps -a -O--buildsystem=golang
   dh_installdeb -a -O--buildsystem=golang
   dh_golang -a -O--buildsystem=golang
   dh_gencontrol -a -O--buildsystem=golang
dpkg-gencontrol: warning: File::FcntlLock not available; using flock which is not NFS-safe
dpkg-gencontrol: warning: File::FcntlLock not available; using flock which is not NFS-safe
   dh_md5sums -a -O--buildsystem=golang
   dh_builddeb -u-Zxz -a -O--buildsystem=golang
dpkg-deb: building package 'nomad-dbgsym' in '../nomad-dbgsym_0.3.1+dfsg-1_armhf.deb'.
dpkg-deb: building package 'nomad' in '../nomad_0.3.1+dfsg-1_armhf.deb'.
 dpkg-genchanges -B -mRaspbian wandboard test autobuilder <root@raspbian.org> >../nomad_0.3.1+dfsg-1_armhf.changes
dpkg-genchanges: warning: package nomad-dbgsym listed in files list but not in control info
dpkg-genchanges: binary-only arch-specific upload (source code and arch-indep packages not included)
 dpkg-source --after-build nomad-0.3.1+dfsg
dpkg-buildpackage: binary-only upload (no source included)
--------------------------------------------------------------------------------
Build finished at 20160414-1302

Finished
--------

I: Built successfully

+------------------------------------------------------------------------------+
| Post Build Chroot                                                            |
+------------------------------------------------------------------------------+


+------------------------------------------------------------------------------+
| Changes                                                                      |
+------------------------------------------------------------------------------+


nomad_0.3.1+dfsg-1_armhf.changes:
---------------------------------

Format: 1.8
Date: Tue, 22 Mar 2016 04:09:16 +1100
Source: nomad
Binary: nomad
Architecture: armhf
Version: 0.3.1+dfsg-1
Distribution: stretch-staging
Urgency: medium
Maintainer: Raspbian wandboard test autobuilder <root@raspbian.org>
Changed-By: Dmitry Smirnov <onlyjob@debian.org>
Description:
 nomad      - distributed, highly available, datacenter-aware scheduler
Closes: 818296
Changes:
 nomad (0.3.1+dfsg-1) unstable; urgency=medium
 .
   * Initial release (Closes: #818296).
Checksums-Sha1:
 05a7c1af070fa3df371fb18cd4031d9f5fc862b8 1622768 nomad-dbgsym_0.3.1+dfsg-1_armhf.deb
 9644d53e27127c0e302262a47539696be8fe58b0 3276234 nomad_0.3.1+dfsg-1_armhf.deb
Checksums-Sha256:
 b2be7e38502b5206dc8868f9e892ac985d127286ac13e08f741e0c9bf23d0109 1622768 nomad-dbgsym_0.3.1+dfsg-1_armhf.deb
 ed22057461b32d366761d2df043a080c8df29d9fcb3efcc96a7ffa913fe8f03e 3276234 nomad_0.3.1+dfsg-1_armhf.deb
Files:
 750f7a67f6e23b0b98828b1372e159b4 1622768 debug extra nomad-dbgsym_0.3.1+dfsg-1_armhf.deb
 b0151678facd38f433a05c075098c04f 3276234 devel extra nomad_0.3.1+dfsg-1_armhf.deb

+------------------------------------------------------------------------------+
| Package contents                                                             |
+------------------------------------------------------------------------------+


nomad-dbgsym_0.3.1+dfsg-1_armhf.deb
-----------------------------------

 new debian package, version 2.0.
 size 1622768 bytes: control archive=1392 bytes.
    3386 bytes,    14 lines      control              
     106 bytes,     1 lines      md5sums              
 Package: nomad-dbgsym
 Source: nomad
 Version: 0.3.1+dfsg-1
 Architecture: armhf
 Maintainer: Dmitry Smirnov <onlyjob@debian.org>
 Installed-Size: 2746
 Depends: nomad (= 0.3.1+dfsg-1)
 Built-Using: consul (= 0.6.3~dfsg-2), docker.io (= 1.8.3~ds1-2), golang (= 2:1.6-1+rpi1), golang-dbus (= 3-1), golang-github-armon-go-metrics (= 0.0~git20151207.0.06b6099-1), golang-github-armon-go-radix (= 0.0~git20150602.0.fbd82e8-1), golang-github-boltdb-bolt (= 1.2.0-1), golang-github-coreos-go-systemd (= 5-1), golang-github-datadog-datadog-go (= 0.0~git20150930.0.b050cd8-1), golang-github-docker-go-units (= 0.3.0-1), golang-github-dustin-go-humanize (= 0.0~git20151125.0.8929fe9-1), golang-github-fsouza-go-dockerclient (= 0.0+git20160316-1), golang-github-go-ini-ini (= 1.8.6-2), golang-github-gorhill-cronexpr (= 1.0.0-1), golang-github-hashicorp-errwrap (= 0.0~git20141028.0.7554cd9-1), golang-github-hashicorp-go-checkpoint (= 0.0~git20151022.0.e4b2dc3-1), golang-github-hashicorp-go-cleanhttp (= 0.0~git20160217.0.875fb67-1), golang-github-hashicorp-go-getter (= 0.0~git20160316.0.575ec4e-1), golang-github-hashicorp-go-immutable-radix (= 0.0~git20160222.0.8e8ed81-1), golang-github-hashicorp-go-memdb (= 0.0~git20160301.0.98f52f5-1), golang-github-hashicorp-go-msgpack (= 0.0~git20150518-1), golang-github-hashicorp-go-multierror (= 0.0~git20150916.0.d30f099-1), golang-github-hashicorp-go-plugin (= 0.0~git20160212.0.cccb4a1-1), golang-github-hashicorp-go-syslog (= 0.0~git20150218.0.42a2b57-1), golang-github-hashicorp-go-version (= 0.0~git20150915.0.2b9865f-1), golang-github-hashicorp-golang-lru (= 0.0~git20160207.0.a0d98a5-1), golang-github-hashicorp-hcl (= 0.0~git20151110.0.fa160f1-1), golang-github-hashicorp-logutils (= 0.0~git20150609.0.0dc08b1-1), golang-github-hashicorp-memberlist (= 0.0~git20160225.0.ae9a8d9-1), golang-github-hashicorp-net-rpc-msgpackrpc (= 0.0~git20151116.0.a14192a-1), golang-github-hashicorp-raft (= 0.0~git20160317.0.3359516-1), golang-github-hashicorp-raft-boltdb (= 0.0~git20150201.d1e82c1-1), golang-github-hashicorp-scada-client (= 0.0~git20150828.0.84989fd-1), golang-github-hashicorp-serf (= 0.7.0~ds1-1), golang-github-hashicorp-yamux (= 0.0~git20151129.0.df94978-1), golang-github-jmespath-go-jmespath (= 0.2.2-1), golang-github-mattn-go-isatty (= 0.0.1-1), golang-github-mitchellh-cli (= 0.0~git20160203.0.5c87c51-1), golang-github-mitchellh-copystructure (= 0.0~git20160128.0.80adcec-1), golang-github-mitchellh-hashstructure (= 0.0~git20160209.0.6b17d66-1), golang-github-mitchellh-mapstructure (= 0.0~git20150717.0.281073e-2), golang-github-mitchellh-reflectwalk (= 0.0~git20150527.0.eecf4c7-1), golang-github-prometheus-client-model (= 0.0.2+git20150212.12.fa8ad6f-1), golang-github-prometheus-common (= 0+git20160321.4045694-1), golang-github-ryanuber-columnize (= 2.1.0-1), golang-github-shirou-gopsutil (= 1.0.0+git20160112-1), golang-github-ugorji-go-codec (= 0.0~git20151130.0.357a44b-1), golang-golang-x-sys (= 0.0~git20150612-1), golang-goprotobuf (= 0.0~git20150526-2), golang-osext (= 0.0~git20151124.0.10da294-2), golang-prometheus-client (= 0.7.0+ds-3), golang-protobuf-extensions (= 0+git20150513.fc2b8d3-4), runc (= 0.0.8+dfsg-2)
 Section: debug
 Priority: extra
 Homepage: https://github.com/hashicorp/nomad
 Description: Debug symbols for nomad
 Auto-Built-Package: debug-symbols
 Build-Ids: e86867d4f2c8b8ac76b78973018e5faa43bdca94

drwxr-xr-x root/root         0 2016-04-14 13:00 ./
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/lib/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/lib/debug/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/lib/debug/.build-id/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/lib/debug/.build-id/e8/
-rw-r--r-- root/root   2801592 2016-04-14 13:00 ./usr/lib/debug/.build-id/e8/6867d4f2c8b8ac76b78973018e5faa43bdca94.debug
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/
lrwxrwxrwx root/root         0 2016-04-14 13:00 ./usr/share/doc/nomad-dbgsym -> nomad


nomad_0.3.1+dfsg-1_armhf.deb
----------------------------

 new debian package, version 2.0.
 size 3276234 bytes: control archive=5549 bytes.
      18 bytes,     1 lines      conffiles            
    5356 bytes,    48 lines      control              
    7419 bytes,    90 lines      md5sums              
     862 bytes,    24 lines   *  postinst             #!/bin/sh
     756 bytes,    28 lines   *  postrm               #!/bin/sh
 Package: nomad
 Version: 0.3.1+dfsg-1
 Architecture: armhf
 Maintainer: Dmitry Smirnov <onlyjob@debian.org>
 Installed-Size: 15400
 Depends: libc6 (>= 2.4), init-system-helpers (>= 1.18~), pipexec
 Built-Using: consul (= 0.6.3~dfsg-2), docker.io (= 1.8.3~ds1-2), golang (= 2:1.6-1+rpi1), golang-dbus (= 3-1), golang-github-armon-go-metrics (= 0.0~git20151207.0.06b6099-1), golang-github-armon-go-radix (= 0.0~git20150602.0.fbd82e8-1), golang-github-boltdb-bolt (= 1.2.0-1), golang-github-coreos-go-systemd (= 5-1), golang-github-datadog-datadog-go (= 0.0~git20150930.0.b050cd8-1), golang-github-docker-go-units (= 0.3.0-1), golang-github-dustin-go-humanize (= 0.0~git20151125.0.8929fe9-1), golang-github-fsouza-go-dockerclient (= 0.0+git20160316-1), golang-github-go-ini-ini (= 1.8.6-2), golang-github-gorhill-cronexpr (= 1.0.0-1), golang-github-hashicorp-errwrap (= 0.0~git20141028.0.7554cd9-1), golang-github-hashicorp-go-checkpoint (= 0.0~git20151022.0.e4b2dc3-1), golang-github-hashicorp-go-cleanhttp (= 0.0~git20160217.0.875fb67-1), golang-github-hashicorp-go-getter (= 0.0~git20160316.0.575ec4e-1), golang-github-hashicorp-go-immutable-radix (= 0.0~git20160222.0.8e8ed81-1), golang-github-hashicorp-go-memdb (= 0.0~git20160301.0.98f52f5-1), golang-github-hashicorp-go-msgpack (= 0.0~git20150518-1), golang-github-hashicorp-go-multierror (= 0.0~git20150916.0.d30f099-1), golang-github-hashicorp-go-plugin (= 0.0~git20160212.0.cccb4a1-1), golang-github-hashicorp-go-syslog (= 0.0~git20150218.0.42a2b57-1), golang-github-hashicorp-go-version (= 0.0~git20150915.0.2b9865f-1), golang-github-hashicorp-golang-lru (= 0.0~git20160207.0.a0d98a5-1), golang-github-hashicorp-hcl (= 0.0~git20151110.0.fa160f1-1), golang-github-hashicorp-logutils (= 0.0~git20150609.0.0dc08b1-1), golang-github-hashicorp-memberlist (= 0.0~git20160225.0.ae9a8d9-1), golang-github-hashicorp-net-rpc-msgpackrpc (= 0.0~git20151116.0.a14192a-1), golang-github-hashicorp-raft (= 0.0~git20160317.0.3359516-1), golang-github-hashicorp-raft-boltdb (= 0.0~git20150201.d1e82c1-1), golang-github-hashicorp-scada-client (= 0.0~git20150828.0.84989fd-1), golang-github-hashicorp-serf (= 0.7.0~ds1-1), golang-github-hashicorp-yamux (= 0.0~git20151129.0.df94978-1), golang-github-jmespath-go-jmespath (= 0.2.2-1), golang-github-mattn-go-isatty (= 0.0.1-1), golang-github-mitchellh-cli (= 0.0~git20160203.0.5c87c51-1), golang-github-mitchellh-copystructure (= 0.0~git20160128.0.80adcec-1), golang-github-mitchellh-hashstructure (= 0.0~git20160209.0.6b17d66-1), golang-github-mitchellh-mapstructure (= 0.0~git20150717.0.281073e-2), golang-github-mitchellh-reflectwalk (= 0.0~git20150527.0.eecf4c7-1), golang-github-prometheus-client-model (= 0.0.2+git20150212.12.fa8ad6f-1), golang-github-prometheus-common (= 0+git20160321.4045694-1), golang-github-ryanuber-columnize (= 2.1.0-1), golang-github-shirou-gopsutil (= 1.0.0+git20160112-1), golang-github-ugorji-go-codec (= 0.0~git20151130.0.357a44b-1), golang-golang-x-sys (= 0.0~git20150612-1), golang-goprotobuf (= 0.0~git20150526-2), golang-osext (= 0.0~git20151124.0.10da294-2), golang-prometheus-client (= 0.7.0+ds-3), golang-protobuf-extensions (= 0+git20150513.fc2b8d3-4), runc (= 0.0.8+dfsg-2)
 Section: devel
 Priority: extra
 Homepage: https://github.com/hashicorp/nomad
 Description: distributed, highly available, datacenter-aware scheduler
  Nomad is a cluster manager, designed for both long lived services and
  short lived batch processing workloads. Developers use a declarative job
  specification to submit work, and Nomad ensures constraints are satisfied
  and resource utilization is optimized by efficient task packing. Nomad
  supports all major operating systems and virtualized, containerized,
  or standalone applications.
  The key features of Nomad are:
  .
   * Docker Support: Jobs can specify tasks which are Docker containers.
   Nomad will automatically run the containers on clients which have Docker
   installed, scale up and down based on the number of instances requested,
   and automatically recover from failures.
  .
   * Multi-Datacenter and Multi-Region Aware: Nomad is designed to be a
   global-scale scheduler. Multiple datacenters can be managed as part of a
   larger region, and jobs can be scheduled across datacenters if requested.
   Multiple regions join together and federate jobs, making it easy to run
   jobs anywhere.
  .
   * Operationally Simple:
   Nomad runs as a single binary that can be either a client or server, and
   is completely self contained. Nomad does not require any external
   services for storage or coordination. This means Nomad combines the
   features of a resource manager and scheduler in a single system.
  .
   * Distributed and Highly-Available: Nomad servers cluster together and
   perform leader election and state replication to provide high
   availability in the face of failure. The Nomad scheduling engine is
   optimized for optimistic concurrency, allowing all servers to make
   scheduling decisions to maximize throughput.
  .
   * HashiCorp Ecosystem: Nomad integrates with the entire HashiCorp
   ecosystem of tools. Along with all HashiCorp tools, Nomad is designed in
   the unix philosophy of doing something specific and doing it well.  Nomad
   integrates with tools like Packer, Consul, and Terraform to support
   building artifacts, service discovery, monitoring and capacity
   management.

drwxr-xr-x root/root         0 2016-04-14 13:00 ./
drwxr-xr-x root/root         0 2016-04-14 13:00 ./etc/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./etc/init.d/
-rwxr-xr-x root/root      2301 2016-03-20 11:29 ./etc/init.d/nomad
drwxr-xr-x root/root         0 2016-04-14 13:00 ./etc/nomad/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./lib/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./lib/systemd/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./lib/systemd/system/
-rw-r--r-- root/root       280 2016-03-20 11:46 ./lib/systemd/system/nomad.service
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/bin/
-rwxr-xr-x root/root  15411340 2016-04-14 13:00 ./usr/bin/nomad
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/nomad/
-rw-r--r-- root/root       240 2016-03-21 17:06 ./usr/share/doc/nomad/README.Debian
-rw-r--r-- root/root      1245 2016-03-16 17:45 ./usr/share/doc/nomad/README.md
-rw-r--r-- root/root       162 2016-03-21 17:14 ./usr/share/doc/nomad/changelog.Debian.gz
-rw-r--r-- root/root      5391 2016-03-16 17:45 ./usr/share/doc/nomad/changelog.gz
-rw-r--r-- root/root     20679 2016-03-20 09:21 ./usr/share/doc/nomad/copyright
drwxr-xr-x root/root         0 2016-03-16 17:45 ./usr/share/doc/nomad/docs/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/nomad/docs/agent/
-rw-r--r-- root/root     21170 2016-03-16 17:45 ./usr/share/doc/nomad/docs/agent/config.html.md
-rw-r--r-- root/root      7064 2016-03-16 17:45 ./usr/share/doc/nomad/docs/agent/index.html.md
-rw-r--r-- root/root      4075 2016-03-16 17:45 ./usr/share/doc/nomad/docs/agent/telemetry.html.md
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/nomad/docs/commands/
-rw-r--r-- root/root      1650 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/agent-info.html.md.erb
-rw-r--r-- root/root       643 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/agent.html.md.erb
-rw-r--r-- root/root      2738 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/alloc-status.html.md.erb
-rw-r--r-- root/root      1722 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/client-config.html.md.erb
-rw-r--r-- root/root      1689 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/eval-monitor.html.md.erb
-rw-r--r-- root/root      1679 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/fs.html.md.erb
-rw-r--r-- root/root      1588 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/index.html.md.erb
-rw-r--r-- root/root       608 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/init.html.md.erb
-rw-r--r-- root/root      1083 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/node-drain.html.md.erb
-rw-r--r-- root/root      2020 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/node-status.html.md.erb
-rw-r--r-- root/root      2541 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/run.html.md.erb
-rw-r--r-- root/root       765 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/server-force-leave.html.md.erb
-rw-r--r-- root/root       975 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/server-join.html.md.erb
-rw-r--r-- root/root      1304 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/server-members.html.md.erb
-rw-r--r-- root/root      1812 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/status.html.md.erb
-rw-r--r-- root/root      1551 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/stop.html.md.erb
-rw-r--r-- root/root       720 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/validate.html.md.erb
-rw-r--r-- root/root       707 2016-03-16 17:45 ./usr/share/doc/nomad/docs/commands/version.html.md.erb
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/nomad/docs/drivers/
-rw-r--r-- root/root       535 2016-03-16 17:45 ./usr/share/doc/nomad/docs/drivers/custom.html.md
-rw-r--r-- root/root     12262 2016-03-16 17:45 ./usr/share/doc/nomad/docs/drivers/docker.html.md
-rw-r--r-- root/root      3099 2016-03-16 17:45 ./usr/share/doc/nomad/docs/drivers/exec.html.md
-rw-r--r-- root/root      1017 2016-03-16 17:45 ./usr/share/doc/nomad/docs/drivers/index.html.md
-rw-r--r-- root/root      2890 2016-03-16 17:45 ./usr/share/doc/nomad/docs/drivers/java.html.md
-rw-r--r-- root/root      3075 2016-03-16 17:45 ./usr/share/doc/nomad/docs/drivers/qemu.html.md
-rw-r--r-- root/root      2423 2016-03-16 17:45 ./usr/share/doc/nomad/docs/drivers/raw_exec.html.md
-rw-r--r-- root/root      2511 2016-03-16 17:45 ./usr/share/doc/nomad/docs/drivers/rkt.html.md
-rw-r--r-- root/root      1781 2016-03-16 17:45 ./usr/share/doc/nomad/docs/faq.html.md
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/nomad/docs/http/
-rw-r--r-- root/root       973 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/agent-force-leave.html.md
-rw-r--r-- root/root      1021 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/agent-join.html.md
-rw-r--r-- root/root      1241 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/agent-members.html.md
-rw-r--r-- root/root      3848 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/agent-self.html.md
-rw-r--r-- root/root      1560 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/agent-servers.html.md
-rw-r--r-- root/root      6666 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/alloc.html.md
-rw-r--r-- root/root      1834 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/allocs.html.md
-rw-r--r-- root/root      1118 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/client-fs-cat.html.md
-rw-r--r-- root/root      1374 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/client-fs-ls.html.md
-rw-r--r-- root/root      1150 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/client-fs-stat.html.md
-rw-r--r-- root/root      2112 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/eval.html.md
-rw-r--r-- root/root      1489 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/evals.html.md
-rw-r--r-- root/root      4809 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/index.html.md
-rw-r--r-- root/root      7585 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/job.html.md
-rw-r--r-- root/root      1985 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/jobs.html.md
-rw-r--r-- root/root      8784 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/node.html.md
-rw-r--r-- root/root      1244 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/nodes.html.md
-rw-r--r-- root/root       493 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/regions.html.md
-rw-r--r-- root/root      1007 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/status.html.md
-rw-r--r-- root/root       704 2016-03-16 17:45 ./usr/share/doc/nomad/docs/http/system.html.md
-rw-r--r-- root/root       517 2016-03-16 17:45 ./usr/share/doc/nomad/docs/index.html.markdown
drwxr-xr-x root/root         0 2016-03-16 17:45 ./usr/share/doc/nomad/docs/install/
-rw-r--r-- root/root      1812 2016-03-16 17:45 ./usr/share/doc/nomad/docs/install/index.html.md
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/nomad/docs/internals/
-rw-r--r-- root/root      7681 2016-03-16 17:45 ./usr/share/doc/nomad/docs/internals/architecture.html.md
-rw-r--r-- root/root     10098 2016-03-16 17:45 ./usr/share/doc/nomad/docs/internals/consensus.html.md
-rw-r--r-- root/root      1889 2016-03-16 17:45 ./usr/share/doc/nomad/docs/internals/gossip.html.md
-rw-r--r-- root/root       580 2016-03-16 17:45 ./usr/share/doc/nomad/docs/internals/index.html.md
-rw-r--r-- root/root      5667 2016-03-16 17:45 ./usr/share/doc/nomad/docs/internals/scheduling.html.md
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/nomad/docs/jobspec/
-rw-r--r-- root/root      4855 2016-03-16 17:45 ./usr/share/doc/nomad/docs/jobspec/environment.html.md
-rw-r--r-- root/root     15707 2016-03-16 17:45 ./usr/share/doc/nomad/docs/jobspec/index.html.md
-rw-r--r-- root/root      5869 2016-03-16 17:45 ./usr/share/doc/nomad/docs/jobspec/interpreted.html.md
-rw-r--r-- root/root      3172 2016-03-16 17:45 ./usr/share/doc/nomad/docs/jobspec/networking.html.md
-rw-r--r-- root/root      2060 2016-03-16 17:45 ./usr/share/doc/nomad/docs/jobspec/schedulers.html.md
-rw-r--r-- root/root      5477 2016-03-16 17:45 ./usr/share/doc/nomad/docs/jobspec/servicediscovery.html.md
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/nomad/docs/upgrade/
-rw-r--r-- root/root      1266 2016-03-16 17:45 ./usr/share/doc/nomad/docs/upgrade/index.html.md
-rw-r--r-- root/root      2310 2016-03-16 17:45 ./usr/share/doc/nomad/docs/upgrade/upgrade-specific.html.md
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/nomad/examples/
-rw-r--r-- root/root       128 2016-03-16 17:45 ./usr/share/doc/nomad/examples/client.hcl
-rw-r--r-- root/root       231 2016-03-16 17:45 ./usr/share/doc/nomad/examples/server.hcl
drwxr-xr-x root/root         0 2016-03-16 17:45 ./usr/share/doc/nomad/intro/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/nomad/intro/getting-started/
-rw-r--r-- root/root      6577 2016-03-16 17:45 ./usr/share/doc/nomad/intro/getting-started/cluster.html.md
-rw-r--r-- root/root      2905 2016-03-16 17:45 ./usr/share/doc/nomad/intro/getting-started/install.html.md
-rw-r--r-- root/root      8050 2016-03-16 17:45 ./usr/share/doc/nomad/intro/getting-started/jobs.html.md
-rw-r--r-- root/root       639 2016-03-16 17:45 ./usr/share/doc/nomad/intro/getting-started/next-steps.html.md
-rw-r--r-- root/root      6230 2016-03-16 17:45 ./usr/share/doc/nomad/intro/getting-started/running.html.md
-rw-r--r-- root/root      3385 2016-03-16 17:45 ./usr/share/doc/nomad/intro/index.html.markdown
-rw-r--r-- root/root      1790 2016-03-16 17:45 ./usr/share/doc/nomad/intro/use-cases.html.markdown
drwxr-xr-x root/root         0 2016-04-14 13:00 ./usr/share/doc/nomad/intro/vs/
-rw-r--r-- root/root      1237 2016-03-16 17:45 ./usr/share/doc/nomad/intro/vs/custom.html.markdown
-rw-r--r-- root/root      1434 2016-03-16 17:45 ./usr/share/doc/nomad/intro/vs/ecs.html.md
-rw-r--r-- root/root      1158 2016-03-16 17:45 ./usr/share/doc/nomad/intro/vs/htcondor.html.md
-rw-r--r-- root/root       816 2016-03-16 17:45 ./usr/share/doc/nomad/intro/vs/index.html.markdown
-rw-r--r-- root/root      2110 2016-03-16 17:45 ./usr/share/doc/nomad/intro/vs/kubernetes.html.md
-rw-r--r-- root/root      1748 2016-03-16 17:45 ./usr/share/doc/nomad/intro/vs/mesos.html.md
-rw-r--r-- root/root      1644 2016-03-16 17:45 ./usr/share/doc/nomad/intro/vs/swarm.html.md
-rw-r--r-- root/root      1842 2016-03-16 17:45 ./usr/share/doc/nomad/intro/vs/terraform.html.md
-rw-r--r-- root/root       569 2016-03-16 17:45 ./usr/share/doc/nomad/intro/vs/yarn.html.md
drwxr-xr-x root/root         0 2016-04-14 13:00 ./var/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./var/lib/
drwxr-xr-x root/root         0 2016-04-14 13:00 ./var/lib/nomad/


+------------------------------------------------------------------------------+
| Post Build                                                                   |
+------------------------------------------------------------------------------+


+------------------------------------------------------------------------------+
| Cleanup                                                                      |
+------------------------------------------------------------------------------+

Purging /<<BUILDDIR>>
Not cleaning session: cloned chroot in use

+------------------------------------------------------------------------------+
| Summary                                                                      |
+------------------------------------------------------------------------------+

Build Architecture: armhf
Build-Space: 146308
Build-Time: 685
Distribution: stretch-staging
Host Architecture: armhf
Install-Time: 802
Job: nomad_0.3.1+dfsg-1
Machine Architecture: armhf
Package: nomad
Package-Time: 1533
Source-Version: 0.3.1+dfsg-1
Space: 146308
Status: successful
Version: 0.3.1+dfsg-1
--------------------------------------------------------------------------------
Finished at 20160414-1302
Build needed 00:25:33, 146308k disc space